INTRODUCTION TO MACHINE LEARNING

By PythonLife
Contents

Preface
1 Introduction
1.1 A Taste of Machine Learning
1.1.1 Applications
1.1.2 Data
1.1.3 Problems
1.2 Probability Theory
1.2.1 Random Variables
1.2.2 Distributions
1.2.3 Mean and Variance
1.2.4 Marginalization, Independence, Conditioning, and Bayes Rule
1.3 Basic Algorithms
1.3.1 Naive Bayes
1.3.2 Nearest Neighbor Estimators
1.3.3 A Simple Classifier
1.3.4 Perceptron
1.3.5 K-Means
2 Density Estimation
2.1 Limit Theorems
2.1.1 Fundamental Laws
2.1.2 The Characteristic Function
2.1.3 Tail Bounds
2.1.4 An Example
2.2 Parzen Windows
2.2.1 Discrete Density Estimation
2.2.2 Smoothing Kernel
2.2.3 Parameter Estimation
2.2.4 Silverman’s Rule
2.2.5 Watson-Nadaraya Estimator
2.3 Exponential Families
2.3.1 Basics
2.3.2 Examples
2.4 Estimation
2.4.1 Maximum Likelihood Estimation
2.4.2 Bias, Variance and Consistency
2.4.3 A Bayesian Approach
2.4.4 An Example
2.5 Sampling
2.5.1 Inverse Transformation
2.5.2 Rejection Sampler
3 Optimization
3.1 Preliminaries
3.1.1 Convex Sets
3.1.2 Convex Functions
3.1.3 Subgradients
3.1.4 Strongly Convex Functions
3.1.5 Convex Functions with Lipschitz Continuous Gradient
3.1.6 Fenchel Duality
3.1.7 Bregman Divergence
3.2 Unconstrained Smooth Convex Minimization
3.2.1 Minimizing a One-Dimensional Convex Function
3.2.2 Coordinate Descent
3.2.3 Gradient Descent
3.2.4 Mirror Descent
3.2.5 Conjugate Gradient
3.2.6 Higher Order Methods
3.2.7 Bundle Methods
3.3 Constrained Optimization
3.3.1 Projection Based Methods
3.3.2 Lagrange Duality
3.3.3 Linear and Quadratic Programs
3.4 Stochastic Optimization
3.4.1 Stochastic Gradient Descent
3.5 Nonconvex Optimization
3.5.1 Concave-Convex Procedure
3.6 Some Practical Advice
4 Online Learning and Boosting
4.1 Halving Algorithm
4.2 Weighted Majority
5 Conditional Densities
5.1 Logistic Regression
5.2 Regression
5.2.1 Conditionally Normal Models
5.2.2 Posterior Distribution
5.2.3 Heteroscedastic Estimation
5.3 Multiclass Classification
5.3.1 Conditionally Multinomial Models
5.4 What is a CRF?
5.4.1 Linear Chain CRFs
5.4.2 Higher Order CRFs
5.4.3 Kernelized CRFs
5.5 Optimization Strategies
5.5.1 Getting Started
5.5.2 Optimization Algorithms
5.5.3 Handling Higher Order CRFs
5.6 Hidden Markov Models
5.7 Further Reading
5.7.1 Optimization
6 Kernels and Function Spaces
6.1 The Basics
6.1.1 Examples
6.2 Kernels
6.2.1 Feature Maps
6.2.2 The Kernel Trick
6.2.3 Examples of Kernels
6.3 Algorithms
6.3.1 Kernel Perceptron
6.3.2 Trivial Classifier
6.3.3 Kernel Principal Component Analysis
6.4 Reproducing Kernel Hilbert Spaces
6.4.1 Hilbert Spaces
6.4.2 Theoretical Properties
6.4.3 Regularization
6.5 Banach Spaces
6.5.1 Properties
6.5.2 Norms and Convex Sets
7 Linear Models
7.1 Support Vector Classification
7.1.1 A Regularized Risk Minimization Viewpoint
7.1.2 An Exponential Family Interpretation
7.1.3 Specialized Algorithms for Training SVMs
7.2 Extensions
7.2.1 The ν Trick
7.2.2 Squared Hinge Loss
7.2.3 Ramp Loss
7.3 Support Vector Regression
7.3.1 Incorporating General Loss Functions
7.3.2 Incorporating the ν Trick
7.4 Novelty Detection
7.5 Margins and Probability
7.6 Beyond Binary Classification
7.6.1 Multiclass Classification
7.6.2 Multilabel Classification
7.6.3 Ordinal Regression and Ranking
7.7 Large Margin Classifiers with Structure
7.7.1 Margin
7.7.2 Penalized Margin
7.7.3 Nonconvex Losses
7.8 Applications
7.8.1 Sequence Annotation
7.8.2 Matching
7.8.3 Ranking
7.8.4 Shortest Path Planning
7.8.5 Image Annotation
7.8.6 Contingency Table Loss
7.9 Optimization
7.9.1 Column Generation
7.9.2 Bundle Methods
7.9.3 Overrelaxation in the Dual
7.10 CRFs vs Structured Large Margin Models
7.10.1 Loss Function
7.10.2 Dual Connections
7.10.3 Optimization
Appendix 1 Linear Algebra and Functional Analysis
Appendix 2 Conjugate Distributions
Appendix 3 Loss Functions
Bibliography
Preface

Since this is a textbook we biased our selection of references towards easily
accessible work rather than the original references. While this may not be
in the interest of the inventors of these concepts, it greatly simplifies access
to those topics. Hence we encourage the reader to follow the references in
the cited works should they be interested in finding out who may claim
intellectual ownership of certain key ideas.


Structure of the Book

[Figure: diagram of the structure of the book, relating the chapters on density estimation, conditional densities, models, and learning.]
Canberra, August 2008


1 Introduction

Over the past two decades Machine Learning has become one of the main-
stays of information technology and with that, a rather central, albeit usually
hidden, part of our life. With the ever increasing amounts of data becoming
available there is good reason to believe that smart data analysis will become
even more pervasive as a necessary ingredient for technological progress.
The purpose of this chapter is to provide the reader with an overview of
the vast range of applications which have at their heart a machine learning
problem and to bring some degree of order to the zoo of problems. After that,
we will discuss some basic tools from statistics and probability theory, since
they form the language in which many machine learning problems must be
phrased to become amenable to solving. Finally, we will outline a set of fairly
basic yet effective algorithms to solve an important problem, namely that of
classification. More sophisticated tools, a discussion of more general
problems and a detailed analysis will follow in later parts of the book.

1.1 A Taste of Machine Learning


Machine learning can appear in many guises. We now discuss a number of
applications, the types of data they deal with, and finally, we formalize the
problems in a somewhat more stylized fashion. The latter is key if we want to
avoid reinventing the wheel for every new application. Instead, much of the
art of machine learning is to reduce a range of fairly disparate problems to
a set of fairly narrow prototypes. Much of the science of machine learning is
then to solve those problems and provide good guarantees for the solutions.

1.1.1 Applications
Most readers will be familiar with the concept of web page ranking. That is,
the process of submitting a query to a search engine, which then finds webpages
relevant to the query and returns them in their order of relevance. See
e.g. Figure 1.1 for an example of the query results for “machine learning”.
That is, the search engine returns a sorted list of webpages given a query. To
achieve this goal, a search engine needs to ‘know’ which

Fig. 1.1. The 5 top scoring webpages for the query “machine learning”

pages are relevant and which pages match the query. Such knowledge can be
gained from several sources: the link structure of webpages, their content,
the frequency with which users will follow the suggested links in a query, or
from examples of queries in combination with manually ranked webpages.
Increasingly machine learning rather than guesswork and clever engineering
is used to automate the process of designing a good search engine [RPB06].
A rather related application is collaborative filtering. Internet book-
stores such as Amazon, or video rental sites such as Netflix use this informa-
tion extensively to entice users to purchase additional goods (or rent more
movies). The problem is quite similar to the one of web page ranking. As
before, we want to obtain a sorted list (in this case of articles). The key dif-
ference is that an explicit query is missing and instead we can only use past
purchase and viewing decisions of the user to predict future viewing and
purchase habits. The key side information here are the decisions made by
similar users, hence the collaborative nature of the process. See Figure 1.2
for an example. It is clearly desirable to have an automatic system to solve
this problem, thereby avoiding guesswork and time [BK07].
An equally ill-defined problem is that of automatic translation of doc-
uments. At one extreme, we could aim at fully understanding a text before
translating it using a curated set of rules crafted by a computational linguist
well versed in the two languages we would like to translate. This is a rather
arduous task, in particular given that text is not always grammatically cor-
rect, nor is the document understanding part itself a trivial one. Instead, we
could simply use examples of translated documents, such as the proceedings
of the Canadian parliament or other multilingual entities (United Nations,
European Union, Switzerland) to learn how to translate between the two
languages. In other words, we could use examples of translations to learn
how to translate. This machine learning approach proved quite successful [?].
Many security applications, e.g. for access control, use face recognition as
one of their components. That is, given the photo (or video recording) of a
person, recognize who this person is. In other words, the system needs to
classify the faces into one of many categories (Alice, Bob, Charlie, . . . ) or
decide that it is an unknown face. A similar, yet conceptually quite different
problem is that of verification. Here the goal is to verify whether the person
in question is who he claims to be. Note that differently to before, this
is now a yes/no question. To deal with different lighting conditions, facial
expressions, whether a person is wearing glasses, hairstyle, etc., it is desirable
to have a system which learns which features are relevant for identifying a
person.
Another application where learning helps is the problem of named entity
recognition (see Figure 1.4). That is, the problem of identifying entities,
such as places, titles, names, actions, etc. from documents. Such steps are
crucial in the automatic digestion and understanding of documents. Some
modern e-mail clients, such as Apple’s Mail.app nowadays ship with the
ability to identify addresses in mails and filing them automatically in an
address book. While systems using hand-crafted rules can lead to satisfac-
tory results, it is far more efficient to use examples of marked-up documents
to learn such dependencies automatically, in particular if we want to deploy
our system in many languages. For instance, while ’bush’ and ’rice’


Fig. 1.2. Books recommended by Amazon.com when viewing Tom Mitchell’s Ma-
chine Learning Book [Mit97]. It is desirable for the vendor to recommend relevant
books which a user might purchase.

Fig. 1.3. 11 Pictures of the same person taken from the Yale face recognition
database. The challenge is to recognize that we are dealing with the same per-
son in all 11 cases.
HAVANA (Reuters) - The European Union’s top development aid official
left Cuba on Sunday convinced that EU diplomatic sanctions against
the communist island should be dropped after Fidel Castro’s
retirement, his main aide said.
<TYPE="ORGANIZATION">HAVANA</> (<TYPE="ORGANIZATION">Reuters</>) - The
<TYPE="ORGANIZATION">European Union</>’s top development aid official left
<TYPE="ORGANIZATION">Cuba</> on Sunday convinced that EU diplomatic sanctions
against the communist <TYPE="LOCATION">island</> should be dropped after
<TYPE="PERSON">Fidel Castro</>’s retirement, his main aide said.

Fig. 1.4. Named entity tagging of a news article (using LingPipe). The relevant
locations, organizations and persons are tagged for further information extraction.

are clearly terms from agriculture, it is equally clear that in the context of
contemporary politics they refer to members of the Republican Party.
Other applications which take advantage of learning are speech recog-
nition (annotate an audio sequence with text, such as the system shipping
with Microsoft Vista), the recognition of handwriting (annotate a sequence
of strokes with text, a feature common to many PDAs), trackpads of com-
puters (e.g. Synaptics, a major manufacturer of such pads derives its name
from the synapses of a neural network), the detection of failure in jet en-
gines, avatar behavior in computer games (e.g. Black and White), direct
marketing (companies use past purchase behavior to guesstimate whether
you might be willing to purchase even more) and floor cleaning robots (such
as iRobot’s Roomba). The overarching theme of learning problems is that
there exists a nontrivial dependence between some observations, which we
will commonly refer to as x and a desired response, which we refer to as y,
for which a simple set of deterministic rules is not known. By using learning
we can infer such a dependency between x and y in a systematic fashion.
We conclude this section by discussing the problem of classification, since
it will serve as a prototypical problem for a significant part of this book.
It occurs frequently in practice: for instance, when performing spam filtering,
we are interested in a yes/no answer as to whether an e-mail contains
relevant information or not. Note that this issue is quite user dependent: for
a frequent traveller e-mails from an airline informing him about recent
discounts might prove valuable information, whereas for many other
recipients this might prove more of a nuisance (e.g. when the e-mail relates
to products available only overseas). Moreover, the nature of annoying e-
mails might change over time, e.g. through the availability of new products
(Viagra, Cialis, Levitra, . . . ), different opportunities for fraud (the Nigerian
419 scam which took a new twist after the Iraq war), or different data types
(e.g. spam which consists mainly of images). To combat these problems we

Fig. 1.5. Binary classification; separate stars from diamonds. In this example we
are able to do so by drawing a straight line which separates both sets. We will see
later that this is an important example of what is called a linear classifier.

want to build a system which is able to learn how to classify new e-mails.
A seemingly unrelated problem, that of cancer diagnosis shares a common
structure: given histological data (e.g. from a microarray analysis of a pa-
tient’s tissue) infer whether a patient is healthy or not. Again, we are asked
to generate a yes/no answer given a set of observations. See Figure 1.5 for
an example.

1.1.2 Data
It is useful to characterize learning problems according to the type of data
they use. This is a great help when encountering new challenges, since quite
often problems on similar data types can be solved with very similar tech-
niques. For instance natural language processing and bioinformatics use very
similar tools for strings of natural language text and for DNA sequences.
Vectors constitute the most basic entity we might encounter in our work. For
instance, a life insurance company might be interested in obtaining the
vector of variables (blood pressure, heart rate, height, weight, cholesterol
level, smoker, gender) to infer the life expectancy of a potential customer.
A farmer might be interested in determining the ripeness of fruit based on
(size, weight, spectral data). An engineer might want to find dependencies
in (voltage, current) pairs. Likewise one might want to represent documents
by a vector of counts which describe the occurrence of words. The latter is
commonly referred to as bag of words features.
One of the challenges in dealing with vectors is that the scales and units of
different coordinates may vary widely. For instance, we could measure the
weight in kilograms, pounds, grams, tons, stones, all of which would amount
to multiplicative changes. Likewise, when representing temperatures, we
have a full class of affine transformations, depending on whether we rep-
resent them in terms of Celsius, Kelvin or Fahrenheit. One way of dealing
with those issues in an automatic fashion is to normalize the data. We will
discuss means of doing so.
Lists: In some cases the vectors we obtain may contain a variable number
of features. For instance, a physician might not necessarily decide to perform
a full battery of diagnostic tests if the patient appears to be healthy.
Sets may appear in learning problems whenever there is a large number of
potential causes of an effect, which are not well determined. For instance, it is
relatively easy to obtain data concerning the toxicity of mushrooms. It would
be desirable to use such data to infer the toxicity of a new mushroom given
information about its chemical compounds. However, mushrooms contain a
cocktail of compounds out of which one or more may be toxic. Consequently
we need to infer the properties of an object given a set of features, whose
composition and number may vary considerably.
Matrices are a convenient means of representing pairwise relationships.
For instance, in collaborative filtering applications the rows of the matrix may
represent users whereas the columns correspond to products. Only in some
cases will we have knowledge about a given (user, product) combination,
such as the rating of the product by a user.
A related situation occurs whenever we only have similarity information
between observations, as implemented by a semi-empirical distance mea-
sure. Some homology searches in bioinformatics, e.g. variants of BLAST
[AGML90], only return a similarity score which does not necessarily satisfy
the requirements of a metric.
Images could be thought of as two dimensional arrays of numbers, that is,
matrices. This representation is very crude, though, since they exhibit spa-
tial coherence (lines, shapes) and (natural images exhibit) a multiresolution
structure. That is, downsampling an image leads to an object which has very
similar statistics to the original image. Computer vision and psychooptics
have created a raft of tools for describing these phenomena.
Video adds a temporal dimension to images. Again, we could represent
them as a three dimensional array. Good algorithms, however, take the tem-
poral coherence of the image sequence into account.
Trees and Graphs are often used to describe relations between collec-
tions of objects. For instance the ontology of webpages of the DMOZ project
(www.dmoz.org) has the form of a tree with topics becoming increasingly
refined as we traverse from the root to one of the leaves (Arts → Animation
→ Anime → General Fan Pages → Official Sites). In the case of gene ontology
the relationships form a directed acyclic graph, also referred to as the
GO-DAG [ABB+00].
Both examples above describe estimation problems where our observations
are vertices of a tree or graph. However, graphs themselves may be the
observations. For instance, the DOM-tree of a webpage, the call-graph of
a computer program, or the protein-protein interaction networks may form
the basis upon which we may want to perform inference.
Strings occur frequently, mainly in the area of bioinformatics and natural
language processing. They may be the input to our estimation problems, e.g.
when classifying an e-mail as spam, when attempting to locate all names of
persons and organizations in a text, or when modeling the topic structure
of a document. Equally well they may constitute the output of a system.
For instance, we may want to perform document summarization, automatic
translation, or attempt to answer natural language queries.
Compound structures are the most commonly occurring object. That is,
in most situations we will have a structured mix of different data types. For
instance, a webpage might contain images, text, tables, which in turn contain
numbers, and lists, all of which might constitute nodes on a graph of
webpages linked among each other. Good statistical modelling takes such de-
pendencies and structures into account in order to tailor sufficiently flexible
models.

1.1.3 Problems
The range of learning problems is clearly large, as we saw when discussing
applications. That said, researchers have identified an ever growing number
of templates which can be used to address a large set of situations. It is those
templates which make deployment of machine learning in practice easy and
our discussion will largely focus on a choice set of such problems. We now
give a by no means complete list of templates.
Binary Classification is probably the most frequently studied problem
in machine learning and it has led to a large number of important algorithmic
and theoretic developments over the past century. In its simplest form it
reduces to the question: given a pattern x drawn from a domain X, estimate
which value an associated binary random variable y ∈ {±1} will assume.
For instance, given pictures of apples and oranges, we might want to state
whether the object in question is an apple or an orange. Equally well, we
might want to predict whether a home owner might default on his loan,
given income data, his credit history, or whether a given e-mail is spam or
ham. The ability to solve this basic problem already allows us to address a
large variety of practical settings.
Many variants exist with regard to the protocol in which we are
required to make our estimation:

Fig. 1.6. Left: binary classification. Right: 3-class classification. Note that in the latter
case we have much more room for ambiguity. For instance, being able to distinguish
stars from diamonds may not suffice to identify either of them correctly, since we
also need to distinguish both of them from triangles.

• We might see a sequence of (xi, yi) pairs for which yi needs to be estimated
in an instantaneous online fashion. This is commonly referred to as online
learning.
• We might observe a collection X := {x1, . . . , xm} and Y := {y1, . . . , ym} of
pairs (xi, yi) which are then used to estimate y for a (set of) so-far unseen
X′ := {x′1, . . . , x′m′}. This is commonly referred to as batch learning.
• We might be allowed to know X′ already at the time of constructing the
model. This is commonly referred to as transduction.
• We might be allowed to choose X for the purpose of model building. This
is known as active learning.
• We might not have full information about X, e.g. some of the coordinates
of the xi might be missing, leading to the problem of estimation with
missing variables.
• The sets X and X′ might come from different data sources, leading to the
problem of covariate shift correction.
• We might be given observations stemming from two problems at the same
time with the side information that both problems are somehow related.
This is known as co-training.
• Mistakes of estimation might be penalized differently depending on the
type of error, e.g. when trying to distinguish diamonds from rocks a very
asymmetric loss applies.
Multiclass Classification is the logical extension of binary classification.
The main difference is that now y ∈ {1, . . . , n} may assume a range of
different values. For instance, we might want to classify a document ac-
cording to the language it was written in (English, French, German, Spanish,
Hindi, Japanese, Chinese, . . . ). See Figure 1.6 for an example. The main dif-
ference to before is that the cost of error may heavily depend on the type of

Fig. 1.7. Regression estimation. We are given a number of instances (indicated by


black dots) and would like to find some function f mapping the observations X to
R such that f (x) is close to the observed values.

error we make. For instance, in the problem of assessing the risk of cancer, it
makes a significant difference whether we mis-classify an early stage of can-
cer as healthy (in which case the patient is likely to die) or as an advanced
stage of cancer (in which case the patient is likely to be inconvenienced from
overly aggressive treatment).
Structured Estimation goes beyond simple multiclass estimation by
assuming that the labels y have some additional structure which can be used
in the estimation process. For instance, y might be a path in an ontology,
when attempting to classify webpages, y might be a permutation, when
attempting to match objects, to perform collaborative filtering, or to rank
documents in a retrieval setting. Equally well, y might be an annotation of
a text, when performing named entity recognition. Each of those problems
has its own properties in terms of the set of y which we might consider
admissible, or how to search this space. We will discuss a number of those
problems in Chapter ??.
Regression is another prototypical application. Here the goal is to esti-
mate a real-valued variable y ∈ R given a pattern x (see e.g. Figure 1.7). For
instance, we might want to estimate the value of a stock the next day, the
yield of a semiconductor fab given the current process, the iron content of
ore given mass spectroscopy measurements, or the heart rate of an athlete,
given accelerometer data. One of the key issues in which regression problems
differ from each other is the choice of a loss. For instance, when estimating
stock values our loss for a put option will be decidedly one-sided. On the
other hand, a hobby athlete might only care that our estimate of the heart
rate matches the actual one on average.
Novelty Detection is a rather ill-defined problem. It describes the issue
of determining “unusual” observations given a set of past measurements.
Clearly, the choice of what is to be considered unusual is very subjective.
A commonly accepted notion is that unusual events occur rarely. Hence a
possible goal is to design a system which assigns to each observation a rating

Fig. 1.8. Left: typical digits contained in the database of the US Postal Service.
Right: unusual digits found by a novelty detection algorithm [SPST+01] (for a
description of the algorithm see Section 7.4). The score below the digits indicates the
degree of novelty. The numbers on the lower right indicate the class associated with
the digit.

as to how novel it is. Readers familiar with density estimation might contend
that the latter would be a reasonable solution. However, we neither need a
score which sums up to 1 on the entire domain, nor do we care particularly
much about novelty scores for typical observations. We will later see how this
somewhat easier goal can be achieved directly. Figure 1.8 has an example of
novelty detection when applied to an optical character recognition database.

1.2 Probability Theory


In order to deal with the situations where machine learning can be used, we
need to develop an adequate language which is able to describe the problems
concisely. Below we begin with a fairly informal overview over probability
theory. For more details and a very gentle and detailed discussion see the
excellent book of [BT03].

1.2.1 Random Variables


Assume that we cast a die and we would like to know our chances whether
we would see 1 rather than another digit. If the die is fair all six outcomes
X = {1, . . . , 6} are equally likely to occur, hence we would see a 1 in roughly
1 out of 6 cases. Probability theory allows us to model uncertainty in the out-
come of such experiments. Formally we state that 1 occurs with probability
1/6.
In many experiments, such as the roll of a die, the outcomes are of a
numerical nature and we can handle them easily. In other cases, the outcomes
may not be numerical, e.g., if we toss a coin and observe heads or tails. In
these cases, it is useful to associate numerical values to the outcomes. This
is done via a random variable. For instance, we can let a random variable
X take on a value +1 whenever the coin lands heads and a value of −1


otherwise. Our notational convention will be to use uppercase letters, e.g., X,
Y etc to denote random variables and lower case letters, e.g., x, y etc to
denote the values they take.


Fig. 1.9. The random variable ξ maps from the set of outcomes of an experiment
(denoted here by X) to real numbers. As an illustration here X consists of the
patients a physician might encounter, and they are mapped via ξ to their weight
and height.

1.2.2 Distributions
Perhaps the most important way to characterize a random variable is to
associate probabilities with the values it can take. If the random variable is
discrete, i.e., it takes on a finite number of values, then this assignment of
probabilities is called a probability mass function or PMF for short. A PMF
must be, by definition, non-negative and must sum to one. For instance,
if the coin is fair, i.e., heads and tails are equally likely, then the random
variable X described above takes on values of +1 and −1 with probability
0.5. This can be written as
Pr(X = +1) = 0.5 and Pr(X = −1) = 0.5. (1.1)
When there is no danger of confusion we will use the slightly informal no-
tation p(x) := Pr(X = x).
In case of a continuous random variable the assignment of probabilities
results in a probability density function or PDF for short. With some abuse of
terminology, but keeping in line with convention, we will often use density or
distribution instead of probability density function. As in the case of the PMF,
a PDF must also be non-negative and integrate to one. Figure 1.10 shows two
distributions: the uniform distribution
p(x) = 1/(b − a) if x ∈ [a, b] and 0 otherwise,   (1.2)

Fig. 1.10. Two common densities. Left: uniform distribution over the interval
[−1, 1]. Right: Normal distribution with zero mean and unit variance.

and the Gaussian distribution (also called normal distribution)


p(x) = (2πσ²)^{−1/2} exp(−(x − µ)²/(2σ²)).   (1.3)

Closely associated with a PDF is the indefinite integral over p. It is commonly
referred to as the cumulative distribution function (CDF).

Definition 1.1 (Cumulative Distribution Function) For a real valued
random variable X with PDF p the associated Cumulative Distribution Function
F is given by

F(x′) := Pr{X ≤ x′} = ∫_{−∞}^{x′} dp(x).   (1.4)

The CDF F(x′) allows us to perform range queries on p efficiently. For
instance, by integral calculus we obtain
Pr(a ≤ X ≤ b) = ∫_a^b dp(x) = F(b) − F(a).   (1.5)

The values of x′ for which F(x′) assumes a specific value, such as 0.1 or 0.5,
have a special name. They are called the quantiles of the distribution p.

Definition 1.2 (Quantiles) Let q ∈ (0, 1). Then the value of x′ for which
Pr(X < x′) ≤ q and Pr(X > x′) ≤ 1 − q is the q-quantile of the distribution
p. Moreover, the value x′ associated with q = 0.5 is called the median.
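
As a minimal numerical illustration (assuming NumPy and SciPy are available), one can estimate quantiles empirically from samples and compare them with the exact values obtained by inverting the CDF of the standard normal distribution:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)        # draws from N(0, 1)

for q in (0.1, 0.5, 0.9):
    empirical = np.quantile(x, q)   # q-quantile of the sample
    exact = norm.ppf(q)             # inverse CDF evaluated at q
    print(q, round(empirical, 3), round(exact, 3))

For q = 0.5 both numbers are close to 0, the median of the standard normal distribution.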

Fig. 1.11. Quantiles of a distribution correspond to the area under the integral of the
density p(x) for which the integral takes on a pre-specified value. Illustrated are
the 0.1, 0.5 and 0.9 quantiles respectively.

1.2.3 Mean and Variance


A common question to ask about a random variable is what its expected value
might be. For instance, when measuring the voltage of a device, we might ask
what its typical values might be. When deciding whether to administer a
growth hormone to a child a doctor might ask what a sensible range of height
should be. For those purposes we need to define expectations and related
quantities of distributions.

Definition 1.3 (Mean) We define the mean of a random variable X as



E[X] := ∫ x dp(x)   (1.6)

More generally, if f : R → R is a function, then f (X) is also a random variable.


Its mean is given by

E[f(X)] := ∫ f(x) dp(x).   (1.7)

Whenever X is a discrete random variable the integral in (1.6) can be
replaced by a summation:

E[X] = Σ_x x p(x).   (1.8)

For instance, in the case of a die we have equal probabilities of 1/6 for all
6 possible outcomes. It is easy to see that this translates into a mean of
(1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.
The mean of a random variable is useful in assessing expected losses and
benefits. For instance, as a stock broker we might be interested in the ex-
pected value of our investment in a year’s time. In addition to that, however,
we also might want to investigate the risk of our investment. That is, how
likely it is that the value of the investment might deviate from its expecta-
tion since this might be more relevant for our decisions. This means that we
need a variable to quantify the risk inherent in a random variable. One such
measure is the variance of a random variable.

Definition 1.4 (Variance) We define the variance of a random variable
X as

Var[X] := E[(X − E[X])²].   (1.9)

As before, if f : R → R is a function, then the variance of f(X) is given by

Var[f(X)] := E[(f(X) − E[f(X)])²].   (1.10)

The variance measures by how much on average f (X) deviates from its ex-
pected value. As we shall see in Section 2.1, an upper bound on the variance
can be used to give guarantees on the probability that f (X) will be within
ϵ of its expected value. This is one of the reasons why the variance is often
associated with the risk of a random variable. Note that often one discusses
properties of a random variable in terms of its standard deviation, which is
defined as the square root of the variance.
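
To make these definitions concrete, here is a small sketch (assuming NumPy) which computes the mean, variance and standard deviation of a fair die exactly via (1.8) and (1.9) and compares them with estimates from simulated rolls:

import numpy as np

outcomes = np.arange(1, 7)                # faces of a fair die
p = np.full(6, 1 / 6)                     # uniform PMF
mean = np.sum(outcomes * p)               # E[X] = 3.5 via (1.8)
var = np.sum((outcomes - mean) ** 2 * p)  # Var[X] = 35/12 via (1.9)
print(mean, var, np.sqrt(var))            # 3.5 2.9167 1.7078

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=100_000)  # simulated fair rolls
print(rolls.mean(), rolls.var())          # close to the exact values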

1.2.4 Marginalization, Independence, Conditioning, and Bayes Rule
Given two random variables X and Y, one can write their joint density
p(x, y). Given the joint density, one can recover p(x) by integrating out y.
This operation is called marginalization:

p(x) = ∫_y dp(x, y).   (1.11)

If Y is a discrete random variable, then we can replace the integration with
a summation:

p(x) = Σ_y p(x, y).   (1.12)

We say that X and Y are independent, i.e., the values that X takes do
not depend on the values that Y takes, whenever

p(x, y) = p(x)p(y). (1.13)

Independence is useful when it comes to dealing with large numbers of ran- dom
variables whose behavior we want to estimate jointly. For instance, whenever
we perform repeated measurements of a quantity, such as when

Fig. 1.12. Left: a sample from two dependent random variables. Knowing about
the first coordinate allows us to improve our guess about the second coordinate. Right:
a sample drawn from two independent random variables, obtained by randomly
permuting the dependent sample.

measuring the voltage of a device, we will typically assume that the individ-
ual measurements are drawn from the same distribution and that they are
independent of each other. That is, having measured the voltage a number
of times will not affect the value of the next measurement. We will call such
random variables to be independently and identically distributed, or in short,
iid random variables. See Figure 1.12 for an example of a pair of random
variables drawn from dependent and independent distributions respectively.
Conversely, dependence can be vital in classification and regression prob-
lems. For instance, the traffic lights at an intersection are dependent on each
other. This allows a driver to perform the inference that when the lights are
green in his direction there will be no traffic crossing his path, i.e. the other
lights will indeed be red. Likewise, whenever we are given a picture x of a
digit, we hope that there will be dependence between x and its label y.
Especially in the case of dependent random variables, we are interested
in conditional probabilities, i.e., the probability that X takes on a particular
value given the value of Y. Clearly Pr(X = rain|Y = cloudy) is higher than
Pr(X = rain|Y = sunny). In other words, knowledge about the value of Y
significantly influences the distribution of X. This is captured via conditional
probabilities:

p(x|y) := p(x, y)/p(y).   (1.14)

Equation 1.14 leads to one of the key tools in statistical inference.

Theorem 1.5 (Bayes Rule) Denote by X and Y random variables; then
the following holds:

p(y|x) = p(x|y)p(y)/p(x).   (1.15)
This follows from the fact that p(x, y) = p(x|y)p(y) = p(y|x)p(x). The key
consequence of (1.15) is that we may reverse the conditioning between a pair
of random variables.

1.2.4.1 An Example
We illustrate our reasoning by means of a simple example — inference using
an AIDS test. Assume that a patient would like to have such a test carried
out on him. The physician recommends a test which is guaranteed to detect
HIV whenever a patient is infected. On the other hand, for healthy
patients it has a 1% error rate. That is, with probability 0.01 it diagnoses
a patient as HIV-positive even when he is, in fact, HIV-negative. Moreover,
assume that 0.15% of the population is infected.
Now assume that the patient has the test carried out and the test re-
turns ’HIV-negative’. In this case, logic implies that he is healthy, since the
test has a 100% detection rate. In the converse case things are not quite as
straightforward. Denote by X and T the random variables associated with
the health status of the patient and the outcome of the test respectively. We
are interested in p(X = HIV+|T = HIV+). By Bayes rule we may write
p(X = HIV+|T = HIV+) = p(T = HIV+|X = HIV+) p(X = HIV+) / p(T = HIV+)
While we know all terms in the numerator, p(T = HIV+) itself is unknown.
That said, it can be computed via
p(T = HIV+) = Σ_{x∈{HIV+,HIV-}} p(T = HIV+, x)
            = Σ_{x∈{HIV+,HIV-}} p(T = HIV+|x) p(x)
            = 1.0 · 0.0015 + 0.01 · 0.9985.


Substituting back into the conditional expression yields
p(X = HIV+|T = HIV+) = 1.0 · 0.0015 / (1.0 · 0.0015 + 0.01 · 0.9985) ≈ 0.1306.
In other words, even though our test is quite reliable, there is such a low prior
probability of having been infected with HIV that there is not much evidence
to accept the hypothesis even after this test.
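
The computation is easy to reproduce in a few lines of Python; the numbers are those given above:

p_x = 0.0015       # prior p(X = HIV+)
p_t_pos = 1.0      # p(T = HIV+ | X = HIV+): perfect detection
p_t_neg = 0.01     # p(T = HIV+ | X = HIV-): 1% false positives

# Marginalize over the health status to obtain p(T = HIV+).
p_t = p_t_pos * p_x + p_t_neg * (1 - p_x)

# Bayes rule (1.15).
posterior = p_t_pos * p_x / p_t
print(posterior)   # approximately 0.1306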

Fig. 1.13. A graphical description of our HIV testing scenario. Knowing the age of
the patient influences our prior on whether the patient is HIV positive (the random
variable X). The outcomes of the tests 1 and 2 are independent of each other given
the status X. We observe the shaded random variables (age, test 1, test 2) and
would like to infer the un-shaded random variable X. This is a special case of a
graphical model which we will discuss in Chapter ??.

Let us now think how we could improve the diagnosis. One way is to ob-
tain further information about the patient and to use this in the diagnosis.
For instance, information about his age is quite useful. Suppose the patient is
35 years old. In this case we would want to compute p(X = HIV+|T = HIV+,
A = 35) where the random variable A denotes the age. The corresponding
expression yields:

p(X = HIV+|T = HIV+, A) = p(T = HIV+|X = HIV+, A) p(X = HIV+|A) / p(T = HIV+|A)
Here we simply conditioned all random variables on A in order to take addi-
tional information into account. We may assume that the test is independent
of the age of the patient, i.e.
p(t|x, a) = p(t|x).
What remains therefore is p(X = HIV+|A). Recent US census data pegs this
number at approximately 0.9%. Plugging all data back into the conditional
expression yields

p(X = HIV+|T = HIV+, A = 35) = 1 · 0.009 / (1 · 0.009 + 0.01 · 0.991) ≈ 0.48.

What has happened here is that by including additional observed random
variables our estimate has become

more reliable. Combination of evidence is a powerful tool. In our case it
helped us make the classification problem of whether the patient is HIV-
positive or not more reliable.
A second tool in our arsenal is the use of multiple measurements. After the
first test the physician is likely to carry out a second test to confirm the
diagnosis. We denote by T1 and T2 (and t1, t2 respectively) the two tests.
Obviously, what we want is that T2 will give us an “independent” second
opinion of the situation. In other words, we want to ensure that T2 does
not make the same mistakes as T1. For instance, it is probably a bad idea
to repeat T1 without changes, since it might perform the same diagnostic
mistake as before. What we want is that the diagnosis of T2 is independent of


that of T1 given the health status X of the patient. This is expressed as
p(t1, t2|x) = p(t1|x)p(t2|x). (1.16)
See Figure 1.13 for a graphical illustration of the setting. Random variables
satisfying the condition (1.16) are commonly referred to as conditionally
independent. In shorthand we write T1 ⊥ T2 | X. For the sake of the argument
we assume that the statistics for T2 are given by
p(t2|x)      x = HIV-   x = HIV+
t2 = HIV-    0.95       0.01
t2 = HIV+    0.05       0.99
Clearly this test is less reliable than the first one. However, we may now
combine both estimates to obtain a very reliable estimate based on the
combination of both events. For instance, for t1 = t2 = HIV+ we have
p(X = HIV+|T1 = HIV+, T2 = HIV+) = 1.0 · 0.99 · 0.009 / (1.0 · 0.99 · 0.009 + 0.01 · 0.05 · 0.991) ≈ 0.95.
In other words, by combining two tests we can now confirm with very high
confidence that the patient is indeed diseased. What we have carried out is a
combination of evidence. Strong experimental evidence of two positive tests
effectively overcame an initially very strong prior which suggested that the
patient might be healthy.
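
This computation, too, is easily reproduced; the sketch below plugs the factorized likelihood (1.16) into Bayes rule, using the age-specific prior p(X = HIV+|A = 35) = 0.009 from above:

prior = 0.009            # p(X = HIV+ | A = 35)
like_pos = 1.0 * 0.99    # p(t1 = +|HIV+) * p(t2 = +|HIV+)
like_neg = 0.01 * 0.05   # p(t1 = +|HIV-) * p(t2 = +|HIV-)

posterior = like_pos * prior / (like_pos * prior + like_neg * (1 - prior))
print(posterior)         # approximately 0.95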
Tests such as in the example we just discussed are fairly common. For
instance, we might need to decide which manufacturing procedure is prefer-
able, which choice of parameters will give better results in a regression es-
timator, or whether to administer a certain drug. Note that often our tests
may not be conditionally independent and we would need to take this into
account.

1.3 Basic Algorithms


We conclude our introduction to machine learning by discussing four simple
algorithms, namely Naive Bayes, Nearest Neighbors, the Mean Classifier,
and the Perceptron, which can be used to solve a binary classification prob-
lem such as that described in Figure 1.5. We will also introduce the K-means
algorithm which can be employed when labeled data is not available. All
these algorithms are readily usable and easily implemented from scratch in
their most basic form.
For the sake of concreteness assume that we are interested in spam filter-
ing. That is, we are given a set of m e-mails xi, denoted by X := {x1, . . . , xm}
From: "LucindaParkison497072" <[email protected]>
To: <[email protected]>
Subject: we think ACGU is our next winner
Date: Mon, 25 Feb 2008 00:01:01 -0500
MIME-Version: 1.0
X-OriginalArrivalTime: 25 Feb 2008 05:01:01.0329 (UTC) FILETIME=[6A931810:01C8776B]
Return-Path: [email protected]

(ACGU) .045 UP 104.5%

I do think that (ACGU) at it’s current levels looks extremely attractive.

Asset Capital Group, Inc., (ACGU) announced that it is expanding the marketing of bio-remediation fluids and cleaning equipment. After
its recent acquisition of interest in American Bio-Clean Corporation and an 80

News is expected to be released next week on this growing company and could drive the price even higher. Buy (ACGU) Monday at open. I
believe those involved at this stage could enjoy a nice ride up.

Fig. 1.14. Example of a spam e-mail

x1: The quick brown fox jumped over the lazy dog.
x2: The dog hunts a fox.
      the  quick  brown  fox  jumped  over  lazy  dog  hunts  a
x1     2     1      1     1      1      1     1    1     0     0
x2     1     0      0     1      0      0     0    1     1     1

Fig. 1.15. Vector space representation of strings.

and associated labels yi, denoted by Y := {y1, . . . , ym}. Here the labels satisfy
yi ∈ {spam, ham}. The key assumption we make here is that the pairs (xi,
yi) are drawn jointly from some distribution p(x, y) which represents the e-
mail generating process for a user. Moreover, we assume that there is
sufficiently strong dependence between x and y that we will be able to
estimate y given x and a set of labeled instances X, Y.
Before we do so we need to address the fact that e-mails such as Figure 1.14
are text, whereas the three algorithms we present will require data to be
represented in a vectorial fashion. One way of converting text into a vector
is by using the so-called bag of words representation [Mar61, Lew98]. In its
simplest version it works as follows: Assume we have a list of all possible
words occurring in X, that is a dictionary, then we are able to assign a unique
number to each of those words (e.g. the position in the dictionary). Now
we may simply count for each document xi the number of times a given word
j occurs. This is then used as the value of the j-th coordinate of xi.
Figure 1.15 gives an example of such a representation. Once we have the
latter it is easy to compute distances, similarities, and other statistics directly
from the vectorial representation.
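
A minimal bag of words encoder might look as follows in Python; it assumes whitespace tokenization with lowercasing and stripped punctuation, and it reproduces the counts of Figure 1.15:

docs = ["The quick brown fox jumped over the lazy dog.",
        "The dog hunts a fox."]

def tokenize(doc):
    return doc.lower().replace(".", "").split()

# Build the dictionary: a unique number for every word in the corpus.
dictionary = {}
for doc in docs:
    for word in tokenize(doc):
        dictionary.setdefault(word, len(dictionary))

# Map a document to its vector of word counts.
def vectorize(doc):
    counts = [0] * len(dictionary)
    for word in tokenize(doc):
        counts[dictionary[word]] += 1
    return counts

for doc in docs:
    print(vectorize(doc))
# [2, 1, 1, 1, 1, 1, 1, 1, 0, 0]
# [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]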

1.3.1 Naive Bayes


In the example of the AIDS test we used the outcomes of the test to infer
whether the patient is diseased. In the context of spam filtering the actual
text of the e-mail x corresponds to the test and the label y is equivalent to the
diagnosis. Recall Bayes Rule (1.15). We could use the latter to infer
p(y|x) = p(x|y)p(y)/p(x).
We may have a good estimate of p(y), that is, the probability of receiving
a spam or ham mail. Denote by mham and mspam the number of ham and
spam e-mails in X. In this case we can estimate
p(ham) ≈ mham/m and p(spam) ≈ mspam/m.

The key problem, however, is that we do not know p(x|y) or p(x). We may
dispose of the requirement of knowing p(x) by settling for a likelihood ratio
L(x) := p(spam|x)/p(ham|x) = p(x|spam) p(spam) / (p(x|ham) p(ham)).   (1.17)
Whenever L(x) exceeds a given threshold c we decide that x is spam and
consequently reject the e-mail. If c is large then our algorithm is conservative
and classifies an email as spam only if p(spam|x) ≫ p(ham|x). On the other
hand, if c is small then the algorithm aggressively classifies emails as spam.
The key obstacle is that we have no access to p(x|y). This is where we make
our key approximation. Recall Figure 1.13. In order to model the distribution
of the test outcomes T1 and T2 we made the assumption that they are
conditionally independent of each other given the diagnosis. Analogously,
we may now treat the occurrence of each word in a document as a separate
test and combine the outcomes in a naive fashion by assuming that
p(x|y) = ∏_{j=1}^{# of words in x} p(w^j|y),   (1.18)

where w^j denotes the j-th word in document x. This amounts to the as-
sumption that the probability of occurrence of a word in a document is
independent of all other words given the category of the document. Even
though this assumption does not hold in general – for instance, the word “York”
is much more likely to appear after the word “New” – it suffices for our purposes (see
Figure 1.16).
This assumption reduces the difficulty of knowing p(x|y) to that of esti-
mating the probabilities of occurrence of individual words w. Estimates for

Fig. 1.16. Naive Bayes model. The occurrence of individual words is independent
of each other, given the category of the text. For instance, the word Viagra is fairly
frequent if y = spam but it is considerably less frequent if y = ham, except when
considering the mailbox of a Pfizer sales representative.

p(w|y) can be obtained, for instance, by simply counting the frequency of
occurrence of the word within documents of a given class. That is, we estimate

p(w|spam) ≈ ( Σ_{i=1}^{m} Σ_{j=1}^{# of words in xi} {yi = spam and w_i^j = w} ) / ( Σ_{i=1}^{m} Σ_{j=1}^{# of words in xi} {yi = spam} )

Here {yi = spam and w_i^j = w} equals 1 if and only if xi is labeled as spam
and w occurs as the j-th word in xi. The denominator is simply the total
number of words in spam documents. Similarly one can compute p(w|ham).
In principle we could perform the above summation whenever we see a new
document x. This would be terribly inefficient, since each such computation
requires a full pass through X and Y. Instead, we can perform a single pass
through X and Y and store the resulting statistics as a good estimate of the
conditional probabilities. Algorithm 1.1 has details of an implementation.
Note that we performed a number of optimizations: Firstly, the normalization
by m_spam^{−1} and m_ham^{−1} respectively is independent of x, hence we
incorporate it as a fixed offset. Secondly, since we are computing a product over
a large number of factors the numbers might lead to numerical overflow or
a large number of factors the numbers might lead to numerical overflow or
underflow. This can be addressed by summing over the logarithm of terms
rather than computing products. Thirdly, we need to address the issue of
estimating p(w|y) for words w which we might not have seen before. One
way of dealing with this is to increment all counts by 1. This method is
commonly referred to as Laplace smoothing. We will encounter a theoretical
justification for this heuristic in Section 2.3.
This simple algorithm is known to perform surprisingly well, and variants of
it can be found in most modern spam filters. It amounts to what is
commonly known as “Bayesian spam filtering”. Obviously, we may apply it
to problems other than document categorization, too.
Algorithm 1.1 Naive Bayes
Train(X, Y) {reads documents X and labels Y}
Compute dictionary D of X with n words.
Compute m, mham and mspam.
Initialize b := log c + log mham − log mspam to offset the rejection threshold
Initialize p ∈ R^{2×n} with pij = 1, wspam = n, wham = n.
{Count occurrence of each word; here x_i^j denotes the number of times word j occurs in document xi}
for i = 1 to m do
  if yi = spam then
    for j = 1 to n do
      p0,j ← p0,j + x_i^j
      wspam ← wspam + x_i^j
    end for
  else
    for j = 1 to n do
      p1,j ← p1,j + x_i^j
      wham ← wham + x_i^j
    end for
  end if
end for
{Normalize counts to yield word probabilities}
for j = 1 to n do
  p0,j ← p0,j/wspam
  p1,j ← p1,j/wham
end for

Classify(x) {classifies document x}
Initialize score threshold t = −b
for j = 1 to n do
  t ← t + x^j (log p0,j − log p1,j)
end for
if t > 0 return spam else return ham
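
A compact Python rendering of Algorithm 1.1, using sums of logarithms and Laplace smoothing as discussed above, might look as follows (the toy documents are of course hypothetical):

import math
from collections import Counter

def train(docs, labels, c=1.0):
    # Count word occurrences per class, as in the Train routine.
    counts = {"spam": Counter(), "ham": Counter()}
    m = {"spam": 0, "ham": 0}
    for words, y in zip(docs, labels):
        counts[y].update(words)
        m[y] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    # Laplace smoothing: every count is incremented by 1.
    total = {y: sum(counts[y].values()) + len(vocab) for y in counts}
    logp = {y: {w: math.log((counts[y][w] + 1) / total[y]) for w in vocab}
            for y in counts}
    # Offset combining the threshold c and the class priors.
    b = math.log(c) + math.log(m["ham"]) - math.log(m["spam"])
    return logp, b, vocab

def classify(words, logp, b, vocab):
    # Sum log likelihood ratios instead of multiplying probabilities.
    t = -b
    for w in words:
        if w in vocab:  # ignore words never seen during training
            t += logp["spam"][w] - logp["ham"][w]
    return "spam" if t > 0 else "ham"

docs = [["buy", "viagra", "now"], ["meeting", "at", "noon"]]
labels = ["spam", "ham"]
model = train(docs, labels)
print(classify(["viagra", "now"], *model))  # spam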

1.3.2 Nearest Neighbor Estimators


An even simpler estimator than Naive Bayes is nearest neighbors. In its most
basic form it assigns the label of its nearest neighbor to an observation x
(see Figure 1.17). Hence, all we need to implement it is a distance measure
d(x, x′) between pairs of observations. Note that this distance need not even
be symmetric. This means that nearest neighbor classifiers can be extremely

Fig. 1.17. 1 nearest neighbor classifier. Depending on whether the query point x is
closest to the star, diamond or triangles, it uses one of the three labels for it.

Fig. 1.18. k-Nearest neighbor classifiers using Euclidean distances. Left: decision
boundaries obtained from a 1-nearest neighbor classifier. Middle: color-coded sets
where the number of red / blue points ranges between 7 and 0. Right: decision
boundary determining where the blue or red dots are in the majority.

flexible. For instance, we could use string edit distances to compare two
documents or information theory based measures.
However, the problem with nearest neighbor classification is that the esti-
mates can be very noisy whenever the data itself is very noisy. For instance,
if a spam email is erroneously labeled as nonspam then all emails which
are similar to this email will share the same fate. See Figure 1.18 for an
example. In this case it is beneficial to pool together a number of neighbors,
say the k-nearest neighbors of x and use a majority vote to decide the class
membership of x. Algorithm 1.2 has a description of the algorithm. Note
that nearest neighbor algorithms can yield excellent performance when used
with a good distance measure. For instance, the technology underlying the
Netflix progress prize [BK07] was essentially nearest neighbours based.
Note that it is trivial to extend the algorithm to regression. All we need
to change in Algorithm 1.2 is to return the average of the values yi instead of
their majority vote. Figure 1.19 has an example.
Algorithm 1.2 k-Nearest Neighbor Classification
Classify(X, Y, x) {reads documents X, labels Y and query x}
for i = 1 to m do
Compute distance d(xi, x)
end for
Compute set I containing indices for the k smallest distances d(xi, x).
return majority label of {yi where i ∈ I}.
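
A minimal sketch of Algorithm 1.2 with Euclidean distances (assuming NumPy) might look as follows; replacing the majority vote by the average of the yi turns it into the regression estimator mentioned above:

import numpy as np
from collections import Counter

def knn_classify(X, Y, x, k=3):
    d = np.linalg.norm(X - x, axis=1)  # distances d(xi, x)
    nearest = np.argsort(d)[:k]        # indices of the k smallest
    return Counter(Y[i] for i in nearest).most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
Y = ["ham", "ham", "spam", "spam"]
print(knn_classify(X, Y, np.array([0.2, 0.1])))  # ham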

Fig. 1.19. k-Nearest neighbor regression estimator using Euclidean distances. Left:
some points (x, y) drawn from a joint distribution. Middle: 1-nearest neighbour
classifier. Right: 7-nearest neighbour classifier. Note that the regression estimate is
much more smooth.

Note that the distance computation d(xi, x) for all observations can become
extremely costly, in particular whenever the number of observations is
large or whenever the observations xi live in a very high dimensional space.
Random projections are a technique that can alleviate the high computa-
tional cost of Nearest Neighbor classifiers. A celebrated lemma by Johnson
and Lindenstrauss [DG03] asserts that a set of m points in high dimensional
Euclidean space can be projected into a O(log m/ϵ2) dimensional Euclidean
space such that the distance between any two points changes only by a fac-
tor of (1 ± ϵ). Since Euclidean distances are preserved, running the Nearest
Neighbor classifier on this mapped data yields the same results but at a
lower computational cost [GIM99].
The surprising fact is that the projection relies on a simple randomized
algorithm: to obtain a d-dimensional representation of n-dimensional random
observations we pick a matrix R ∈ R^{d×n} where each element is drawn
independently from a normal distribution with zero mean and variance n^{−1/2}.
Multiplying x with this projection matrix can be shown to achieve this prop-
erty with high probability. For details see [DG03].
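
The following sketch illustrates the idea; it uses the common convention of scaling the Gaussian entries by d^{−1/2} for a d-dimensional target so that squared Euclidean distances are approximately preserved in expectation:

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 200, 50           # ambient dim, target dim, points
X = rng.normal(size=(m, n))         # m observations in R^n

# Gaussian random projection matrix R in R^{d x n}.
R = rng.normal(scale=1.0 / np.sqrt(d), size=(d, n))
Z = X @ R.T                         # project all points into R^d

print(np.linalg.norm(X[0] - X[1]))  # distance before projection
print(np.linalg.norm(Z[0] - Z[1]))  # approximately the same after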

Fig. 1.20. A trivial classifier. Classification is carried out according to which of
the two means µ− or µ+ is closer to the test point x. Note that the sets of positive
and negative labels respectively form a half space.

1.3.3 A Simple Classifier


We can use geometry to design another simple classification algorithm [SS02] for
our problem. For simplicity we assume that the observations x ∈ R^d, such as
the bag-of-words representation of e-mails. We define the means µ+ and µ−
to correspond to the classes y ∈ {±1} via

µ− := (1/m−) Σ_{yi=−1} xi and µ+ := (1/m+) Σ_{yi=1} xi.

Here we used m− and m+ to denote the number of observations with label
yi = −1 and yi = +1 respectively. An even simpler approach than using the
nearest neighbor classifier would be to use the class label which corresponds
to the mean closest to a new query x, as described in Figure 1.20.
For Euclidean distances we have

‖µ− − x‖² = ‖µ−‖² + ‖x‖² − 2⟨µ−, x⟩ and   (1.19)
‖µ+ − x‖² = ‖µ+‖² + ‖x‖² − 2⟨µ+, x⟩.   (1.20)

Here ⟨·, ·⟩ denotes the standard dot product between vectors. Taking differences
between the two distances yields

f(x) := ‖µ− − x‖² − ‖µ+ − x‖² = 2⟨µ+ − µ−, x⟩ + ‖µ−‖² − ‖µ+‖².   (1.21)
This is a linear function in x and its sign corresponds to the label we estimate for x. Our algorithm sports an important property: the classification rule can be expressed via dot products. This follows from

‖µ+‖² = ⟨µ+, µ+⟩ = m+^{−2} Σ_{yi=yj=1} ⟨xi, xj⟩ and ⟨µ+, x⟩ = m+^{−1} Σ_{yi=1} ⟨xi, x⟩.
Fig. 1.21. The feature map φ maps observations x from X into a feature space H.
The map φ is a convenient way of encoding pre-processing steps systematically.

Analogous expressions can be computed for µ−. Consequently we may express the classification rule (1.21) as

f(x) = Σ_{i=1}^{m} αi ⟨xi, x⟩ + b (1.22)

where b = m−^{−2} Σ_{yi=yj=−1} ⟨xi, xj⟩ − m+^{−2} Σ_{yi=yj=1} ⟨xi, xj⟩ and αi = 2yi/m_{yi} (the factor 2 comes from the inner product term in (1.21)).
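In code the classifier is equally simple. The sketch below computes f(x) directly from the two means, which yields the same sign as the dot product expansion (1.22):

```python
import numpy as np

def train_mean_classifier(X, Y):
    """Compute the class means from data X (m, d) and labels Y in {-1, +1}."""
    mu_minus = X[Y == -1].mean(axis=0)
    mu_plus = X[Y == +1].mean(axis=0)
    return mu_minus, mu_plus

def predict(mu_minus, mu_plus, x):
    # f(x) = ||mu_- - x||^2 - ||mu_+ - x||^2 > 0 iff x is closer to mu_+.
    f = np.sum((mu_minus - x) ** 2) - np.sum((mu_plus - x) ** 2)
    return np.sign(f)
```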

This offers a number of interesting extensions. Recall that when dealing
with documents we needed to perform pre-processing to map e-mails into a
vector space. In general, we may pick arbitrary maps φ : X → H mapping the
space of observations into a feature space H, as long as the latter is endowed
with a dot product (see Figure 1.21). This means that instead of dealing with ⟨x, x′⟩ we will be dealing with ⟨φ(x), φ(x′)⟩.
As we will see in Chapter 6, whenever H is a so-called Reproducing Kernel
Hilbert Space, the inner product can be abbreviated in the form of a kernel
function k(x, x′) which satisfies

k(x, x′) := ⟨φ(x), φ(x′)⟩. (1.23)

This small modification leads to a number of very powerful algorithms and it is at the foundation of an area of research called kernel methods. We will encounter a number of such algorithms for regression, classification, segmentation, and density estimation over the course of the book. Examples of suitable k are the polynomial kernel k(x, x′) = ⟨x, x′⟩^d for d ∈ N and the Gaussian RBF kernel k(x, x′) = e^{−γ‖x−x′‖²} for γ > 0.
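These two kernels are one-liners; a sketch (parameter names are ours):

```python
import numpy as np

def polynomial_kernel(x, xp, d=2):
    """k(x, x') = <x, x'>^d for d in N."""
    return np.dot(x, xp) ** d

def gaussian_rbf_kernel(x, xp, gamma=1.0):
    """k(x, x') = exp(-gamma ||x - x'||^2) for gamma > 0."""
    return np.exp(-gamma * np.sum((x - xp) ** 2))
```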
The upshot of (1.23) is that our basic algorithm can be kernelized. That is, we may rewrite (1.21) as

f(x) = Σ_{i=1}^{m} αi k(xi, x) + b (1.24)

where as before αi = 2yi/m_{yi} and the offset b is computed analogously.

Algorithm 1.3 The Perceptron
Perceptron(X, Y) {reads stream of observations (xi, yi)}
Initialize w = 0 and b = 0
while there exists some (xi, yi) with yi(⟨w, xi⟩ + b) ≤ 0 do
w ← w + yixi and b ← b + yi
end while

Algorithm 1.4 The Kernel Perceptron
KernelPerceptron(X, Y) {reads stream of observations (xi, yi)}
Initialize f = 0
while there exists some (xi, yi) with yi f(xi) ≤ 0 do
f ← f + yi k(xi, ·) + yi
end while

As a consequence we have now moved from a fairly simple and pedestrian linear classifier to one which yields a nonlinear function f(x) with a rather nontrivial decision boundary.

1.3.4 Perceptron
In the previous sections we assumed that our classifier had access to a train-
ing set of spam and non-spam emails. In real life, such a set might be difficult
to obtain all at once. Instead, a user might want to have instant results when-
ever a new e-mail arrives and he would like the system to learn immediately
from any corrections to mistakes the system makes.
To overcome both these difficulties one could envisage working with the
following protocol: As emails arrive our algorithm classifies them as spam or
non-spam, and the user provides feedback as to whether the classification is
correct or incorrect. This feedback is then used to improve the performance
of the classifier over a period of time.
This intuition can be formalized as follows: Our classifier maintains a
parameter vector. At the t-th time instance it receives a data point xt, to which
it assigns a label ŷt using its current parameter vector. The true label yt is
then revealed, and used to update the parameter vector of the classifier. Such algorithms are said to be online. We will now describe perhaps the simplest
classifier of this kind namely the Perceptron [Heb49, Ros58].
Let us assume that the data points xt ∈ Rd, and labels yt ∈ {±1}. As
before we represent an email as a bag-of-words vector and we assign +1 to spam
emails and −1 to non-spam emails. The Perceptron maintains a weight
Fig. 1.22. The Perceptron without bias. Left: at time t we have a weight vector wt
denoted by the dashed arrow with corresponding separating plane (also dashed). For
reference we include the linear separator w∗ and its separating plane (both denoted
by a solid line). As a new observation xt arrives which happens to be mis-classified by
the current weight vector wt we perform an update. Also note the margin between
the point xt and the separating hyperplane defined by w∗. Right: This leads to the
weight vector wt+1 which is more aligned with w∗.

vector w ∈ Rd and classifies xt according to the rule

ŷt := sign{⟨ w, xt ⟩ + b}, (1.25)

where ⟨ w, xt⟩ denotes the usual Euclidean dot product and b is an offset. Note
the similarity of (1.25) to (1.21) of the simple classifier. Just as the latter,
the Perceptron is a linear classifier which separates its domain Rd into two
halfspaces, namely {x | ⟨w, x⟩ + b > 0} and its complement. If ŷt = yt then no updates are made. On the other hand, if ŷt ≠ yt the weight vector is updated as

w ← w + ytx t and b ← b + yt. (1.26)

Figure 1.22 shows an update step of the Perceptron algorithm. For simplicity
we illustrate the case without bias, that is, where b = 0 and where it remains
unchanged. A detailed description of the algorithm is given in Algorithm 1.3.
An important property of the algorithm is that it performs updates on w
by multiples of the observations xi on which it makes a mistake. Hence we may express w as w = Σ_{i∈Error} yi xi. Just as before, we can replace xi and x
by φ(xi) and φ(x) to obtain a kernelized version of the Perceptron algorithm
[FS99] (Algorithm 1.4).
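A compact rendering of Algorithms 1.3 and 1.4 might look as follows (a sketch; we cap the number of passes over the data instead of looping until no mistake remains, which is safer on data that is not linearly separable):

```python
import numpy as np

def perceptron(X, Y, max_passes=100):
    """Algorithm 1.3: X is (m, d), Y contains labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, Y):
            if yi * (np.dot(w, xi) + b) <= 0:   # (xi, yi) is misclassified
                w, b = w + yi * xi, b + yi      # update (1.26)
                mistakes += 1
        if mistakes == 0:
            break
    return w, b

def kernel_perceptron(X, Y, k, max_passes=100):
    """Algorithm 1.4: f(x) = sum_i alpha_i k(x_i, x) + b."""
    m = len(Y)
    alpha, b = np.zeros(m), 0.0
    for _ in range(max_passes):
        mistakes = 0
        for t in range(m):
            f = sum(alpha[i] * k(X[i], X[t]) for i in range(m)) + b
            if Y[t] * f <= 0:
                alpha[t] += Y[t]                # f <- f + y_t k(x_t, .)
                b += Y[t]
                mistakes += 1
        if mistakes == 0:
            break
    return alpha, b
```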
If the dataset (X, Y) is linearly separable, then the Perceptron algorithm
eventually converges and correctly classifies all the points in X. The rate of
convergence however depends on the margin. Roughly speaking, the margin
quantifies how linearly separable a dataset is, and hence how easy it is to
solve a given classification problem.

Definition 1.6 (Margin) Let w ∈ Rd be a weight vector and let b ∈ R be an offset. The margin of an observation x ∈ Rd with associated label y is

γ(x, y) := y (⟨w, x⟩ + b). (1.27)

Moreover, the margin of an entire set of observations X with labels Y is

γ(X, Y) := min_i γ(xi, yi). (1.28)

Geometrically speaking (see Figure 1.22) the margin measures the distance
of x from the hyperplane defined by {x | ⟨w, x⟩ + b = 0}. The larger the margin, the better separated the data, and hence the easier it is to find a hyperplane which correctly classifies the dataset. The following theorem asserts that if
there exists a linear classifier which can classify a dataset with a large mar-
gin, then the Perceptron will also correctly classify the same dataset after
making a small number of mistakes.

Theorem 1.7 (Novikoff’s theorem) Let (X, Y) be a dataset with at least one example labeled +1 and one example labeled −1. Let R := max_t ‖xt‖, and assume that there exists (w∗, b∗) such that ‖w∗‖ = 1 and γt := yt(⟨w∗, xt⟩ + b∗) ≥ γ for all t. Then the Perceptron will make at most (1 + R²)(1 + (b∗)²)/γ² mistakes.
This result is remarkable since it does not depend on the dimensionality
of the problem. Instead, it only depends on the geometry of the setting,
as quantified via the margin γ and the radius R of a ball enclosing the
observations. Interestingly, a similar bound can be shown for Support Vector
Machines [Vap95] which we will be discussing in Chapter 7.
Proof We can safely ignore the iterations where no mistakes were made and
hence no updates were carried out. Therefore, without loss of generality
assume that the t-th update was made after seeing the t-th observation and
let wt denote the weight vector after the update. Furthermore, for simplicity
assume that the algorithm started with w0 = 0 and b0 = 0. By the update
equation (1.26) we have
⟨wt, w∗⟩ + bt b∗ = ⟨wt−1, w∗⟩ + bt−1 b∗ + yt(⟨xt, w∗⟩ + b∗)
≥ ⟨wt−1, w∗⟩ + bt−1 b∗ + γ.
By induction it follows that ⟨wt, w∗⟩ + bt b∗ ≥ tγ. On the other hand, we made an update because yt(⟨xt, wt−1⟩ + bt−1) < 0. By using yt yt = 1,

‖wt‖² + bt² = ‖wt−1‖² + bt−1² + yt²‖xt‖² + 1 + 2yt(⟨wt−1, xt⟩ + bt−1)
≤ ‖wt−1‖² + bt−1² + ‖xt‖² + 1.

Since ‖xt‖² ≤ R² we can again apply induction to conclude that ‖wt‖² + bt² ≤ t(R² + 1). Combining the upper and the lower bounds, using the Cauchy-Schwartz inequality, and ‖w∗‖ = 1 yields

tγ ≤ ⟨wt, w∗⟩ + bt b∗ = ⟨(wt, bt), (w∗, b∗)⟩
≤ √(‖wt‖² + bt²) · √(1 + (b∗)²)
≤ √(t(R² + 1)) · √(1 + (b∗)²).

Squaring both sides of the inequality and rearranging the terms yields an upper bound on the number of updates and hence the number of mistakes.

The Perceptron was the building block of research on Neural Networks
[Hay98, Bis95]. The key insight was to combine large numbers of such networks, often in a cascading fashion, into larger objects and to design optimization algorithms which would lead to classifiers with desirable properties.
In this book we will take a complementary route. Instead of increasing the
number of nodes we will investigate what happens when increasing the com-
plexity of the feature map φ and its associated kernel k. The advantage of
doing so is that we will reap the benefits from convex analysis and linear
models, possibly at the expense of a slightly more costly function evaluation.

1.3.5 K-Means
All the algorithms we discussed so far are supervised, that is, they assume
that labeled training data is available. In many applications this is too much
to hope for; labeling may be expensive, error prone, or sometimes impossi-
ble. For instance, it is very easy to crawl and collect every page within the
www.purdue.edu domain, but rather time consuming to assign a topic to
each page based on its contents. In such cases, one has to resort to unsuper-
vised learning. A prototypical unsupervised learning algorithm is K-means, which is a clustering algorithm. Given X = {x1, . . . , xm} the goal of K-means is to partition it into k clusters such that each point is more similar to points from its own cluster than to points from other clusters.
Towards this end, define prototype vectors µ1, . . . , µk and an indicator variable rij which is 1 if, and only if, xi is assigned to cluster j. To cluster our dataset we will minimize the following distortion measure, the sum of squared distances of each point from its prototype vector:

J(r, µ) := (1/2) Σ_{i=1}^{m} Σ_{j=1}^{k} rij ‖xi − µj‖², (1.29)

where r = {rij}, µ = {µj}, and ‖·‖ denotes the usual Euclidean norm. Our goal is to find r and µ, but since it is not easy to jointly minimize J with respect to both r and µ, we will adopt a two-stage strategy:
Stage 1 Keep µ fixed and determine r. In this case, it is easy to see that the minimization decomposes into m independent problems. The solution for the i-th data point xi can be found by setting

rij = 1 if j = argmin_{j′} ‖xi − µ_{j′}‖², (1.30)

and 0 otherwise.
Stage 2 Keep r fixed and determine µ. Since the r’s are fixed, J is a quadratic function of µ. It can be minimized by setting the derivative with respect to µj to zero:

Σ_{i=1}^{m} rij (xi − µj) = 0 for all j. (1.31)

Rearranging yields

µj = (Σ_i rij xi) / (Σ_i rij). (1.32)

Since Σ_i rij counts the number of points assigned to cluster j, we are essentially setting µj to be the sample mean of the points assigned to cluster j.
The algorithm stops when the cluster assignments do not change signifi-
cantly. Detailed pseudo-code can be found in Algorithm 1.5.
Two issues with K-Means are worth noting. First, it is sensitive to the choice
of the initial cluster centers µ. A number of practical heuristics have been
developed. For instance, one could randomly choose k points from the given
dataset as cluster centers. Other methods try to pick k points from X which
are farthest away from each other. Second, it makes a hard assignment of
every point to a cluster center. Variants which we will encounter later in
Algorithm 1.5 K-Means
Cluster(X) {cluster dataset X}
Initialize cluster centers µj for j = 1, . . . , k randomly
repeat
for i = 1 to m do
Compute j′ = argmin_{j=1,...,k} d(xi, µj)
Set rij′ = 1 and rij = 0 for all j ≠ j′
end for
for j = 1 to k do
Compute µj = (Σ_i rij xi)/(Σ_i rij)
end for
until cluster assignments rij are unchanged
return {µ1, . . . , µk} and rij

the book will relax this. Instead of letting rij ∈ {0, 1} these soft variants
will replace it with the probability that a given xi belongs to cluster j.
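A compact NumPy sketch of Algorithm 1.5, using the heuristic of initializing the centers with k randomly chosen data points:

```python
import numpy as np

def kmeans(X, k, seed=0):
    rng = np.random.default_rng(seed)
    m = len(X)
    mu = X[rng.choice(m, size=k, replace=False)].copy()  # initial centers
    assign = np.full(m, -1)
    while True:
        # Stage 1: assign every point to its closest prototype.
        dist = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        new_assign = dist.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break                                 # assignments unchanged
        assign = new_assign
        # Stage 2: recompute each prototype as the mean of its points.
        for j in range(k):
            if np.any(assign == j):
                mu[j] = X[assign == j].mean(axis=0)
    return mu, assign
```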
The K-Means algorithm concludes our discussion of a set of basic machine
learning methods for classification and regression. They provide a useful
starting point for an aspiring machine learning researcher. In this book we will
see many more such algorithms as well as connections between these basic
algorithms and their more advanced counterparts.

Problems
Problem 1.1 (Eyewitness) Assume that an eyewitness is 90% certain that a given person committed a crime in a bar. Moreover, assume that there were 50 people in the bar at the time of the crime. What is the posterior probability of the person actually having committed the crime?

Problem 1.2 (DNA Test) Assume the police have a DNA library of 10
million records. Moreover, assume that the false recognition probability is
below 0.00001% per record. Suppose a match is found after a database search
for an individual. What are the chances that the identification is correct? You
can assume that the total population is 100 million people. Hint: compute the
probability of no match occurring first.

Problem 1.3 (Bomb Threat) Suppose that the probability that one of a
thousand passengers on a plane has a bomb is 1 : 1, 000, 000. Assuming that the
probability to have a bomb is evenly distributed among the passengers, the probability that two passengers have a bomb is roughly equal to 10^{−12}.
Therefore, one might decide to take a bomb on a plane to decrease chances
that somebody else has a bomb. What is wrong with this argument?

Problem 1.4 (Monty-Hall Problem) Assume that in a TV show the
candidate is given the choice between three doors. Behind two of the doors
there is a pencil and behind one there is the grand prize, a car. The candi-
date chooses one door. After that, the showmaster opens another door behind
which there is a pencil. Should the candidate switch doors after that? What
is the probability of winning the car?

Problem 1.5 (Mean and Variance for Random Variables) Denote by Xi random variables. Prove that in this case

E_{X1,...,XN}[Σ_i xi] = Σ_i E_{Xi}[xi] and Var_{X1,...,XN}[Σ_i xi] = Σ_i Var_{Xi}[xi].

To show the second equality assume independence of the Xi.

Problem 1.6 (Two Dice) Assume you have a game which uses the maximum of two dice. Compute the probability of seeing any of the events {1, . . . , 6}. Hint: prove first that the cumulative distribution function of the maximum of a pair of random variables is the square of the original cumulative distribution function.

Problem 1.7 (Matching Coins) Consider the following game: two players bring a coin each. The first player bets that when tossing the coins both will match and the second one bets that they will not match. Show that even if one of the players were to bring a tainted coin, the game still would be fair. Show that it is in the interest of each player to bring a fair coin to the game. Hint: assume that the second player knows that the first coin favors heads over tails.

Problem 1.8 (Randomized Maximization) How many observations do you
need to draw from a distribution to ensure that the maximum over them is
larger than 95% of all observations with at least 95% probability? Hint:
generalize the result from Problem 1.6 to the maximum over n random vari-
ables.
Application: Assume we have 1000 computers performing MapReduce [DG08]
and the Reducers have to wait until all 1000 Mappers are finished with their
job. Compute the quantile of the typical time to completion.
Problem 1.9 Prove that the Normal distribution (1.3) has mean µ and
variance σ2. Hint: exploit the fact that p is symmetric around µ.

Problem 1.10 (Cauchy Distribution) Prove that for the density

p(x) = 1/(π(1 + x²)) (1.33)

mean and variance are undefined. Hint: show that the integral diverges.

Problem 1.11 (Quantiles) Find a distribution for which the mean ex- ceeds
the median. Hint: the mean depends on the value of the high-quantile terms,
whereas the median does not.

Problem 1.12 (Multicategory Naive Bayes) Prove that for multicategory Naive Bayes the optimal decision is given by

y∗(x) := argmax_y p(y) Π_{i=1}^{n} p([x]_i | y) (1.34)

where y ∈ Y is the class label of the observation x.



Problem 1.13 (Bayes Optimal Decisions) Denote by y∗(x) = argmax_y p(y|x) the label associated with the largest conditional class probability. Prove that for y∗(x) the probability of choosing the wrong label y is given by

l(x) := 1 − p(y∗(x)|x).

Moreover, show that y∗(x) is the label incurring the smallest misclassification error.

Problem 1.14 (Nearest Neighbor Loss) Show that the expected loss incurred by the nearest neighbor classifier does not exceed twice the loss of the Bayes optimal decision.
2 Density Estimation
2.1 Limit Theorems


Assume you are a gambler and go to a casino to play a game of dice. As it happens, it is your unlucky day and among the 100 times you toss the dice, you only see ’6’ eleven times. For a fair dice we know that each face should occur with equal probability 1/6. Hence the expected number of ’6’ draws among 100 tosses is 100/6 ≈ 17, which is considerably more than the eleven times that we observed. Before crying foul you decide that some mathematical analysis is in order.
The probability of seeing a particular sequence of m trials out of which n are a ’6’ is given by (1/6)^n (5/6)^{m−n}. Moreover, there are (m choose n) = m!/(n!(m−n)!) different sequences of ’6’ and ’not 6’ with proportions n and m−n respectively. Hence we may compute the probability of seeing a ’6’ only 11 times or less via

Pr(X ≤ 11) = Σ_{i=0}^{11} p(i) = Σ_{i=0}^{11} (100 choose i) (1/6)^i (5/6)^{100−i} ≈ 7.0% (2.1)

After looking at this figure you decide that things are probably reasonable.
And, in fact, they are consistent with the convergence behavior of a sim-
ulated dice in Figure 2.1. In computing (2.1) we have learned something
useful: the expansion is a special case of a binomial series.
Fig. 2.1. Convergence of empirical means to expectations. From left to right: empirical frequencies of occurrence obtained by casting a dice 10, 20, 50, 100, 200, and 500 times respectively. Note that after 20 throws we still have not observed a single ’6’, an event which occurs with probability (5/6)^{20} ≈ 2.6% only.
The first term counts the number of configurations in which we could observe i times ’6’ in a sequence of 100 dice throws. The second and third terms are the probabilities of seeing one particular instance of such a sequence.
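The figure of roughly 7.0% in (2.1) is easily reproduced numerically, e.g. with a couple of lines of Python:

```python
from math import comb

# Pr(X <= 11) for 100 tosses of a fair dice, as in (2.1)
p = sum(comb(100, i) * (1/6)**i * (5/6)**(100 - i) for i in range(12))
print(p)   # roughly 0.07, i.e. the approximately 7.0% quoted above
```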
Note that in general we may not be as lucky, since we may have considerably less information about the setting we are studying. For instance, we might not know the actual probabilities for each face of the dice, a likely scenario when gambling at a casino of questionable reputation. Often the outcomes of the system we are dealing with may be continuous valued random variables rather than binary ones, possibly even with unknown range. For instance, when trying to determine the average wage through a questionnaire we need to determine how many people we need to ask in order to obtain a certain level of confidence.
To answer such questions we need to discuss limit theorems. They tell
us by how much averages over a set of observations may deviate from the
corresponding expectations and how many observations we need to draw to
estimate a number of probabilities reliably. For completeness we will present
proofs for some of the more fundamental theorems in Section 2.1.2. They are
useful albeit non-essential for the understanding of the remainder of the book
and may be omitted.

2.1.1 Fundamental Laws
The Law of Large Numbers developed by Bernoulli in 1713 is one of the
fundamental building blocks of statistical analysis. It states that averages over
a number of observations converge to their expectations given a suffi- ciently
large number of observations and given certain assumptions on the
independence of these observations. It comes in two flavors: the weak and
the strong law.

Theorem 2.1 (Weak Law of Large Numbers) Denote by X1, . . . , Xm random variables drawn from p(x) with mean µ = E_{Xi}[xi] for all i. Moreover let

X̄m := (1/m) Σ_{i=1}^{m} Xi (2.2)

be the empirical average over the random variables Xi. Then for any ϵ > 0 the following holds:

lim_{m→∞} Pr(|X̄m − µ| ≤ ϵ) = 1. (2.3)
Fig. 2.2. The mean of a number of casts of a dice. The horizontal straight line denotes the mean 3.5. The uneven solid line denotes the actual mean X̄n as a function of the number of draws, given as a semilogarithmic plot. The crosses denote the outcomes of the dice. Note how X̄n ever more closely approaches the mean 3.5 as we obtain an increasing number of observations.

This establishes that, indeed, for large enough sample sizes, the average will
converge to the expectation. The strong law strengthens this as follows:

Theorem 2.2 (Strong Law of Large Numbers) Under the conditions of Theorem 2.1 we have Pr(lim_{m→∞} X̄m = µ) = 1.

The strong law implies that almost surely (in a measure theoretic sense) X̄m
converges to µ, whereas the weak law only states that for every ϵ the random
variable X̄m will be within the interval [µ − ϵ, µ+ϵ]. Clearly the strong implies
the weak law since the measure of the events X̄m = µ converges to 1, hence
any ϵ-ball around µ would capture this.
Both laws justify that we may take sample averages, e.g. over a number
of events such as the outcomes of a dice and use the latter to estimate their
means, their probabilities (here we treat the indicator variable of the event
as a {0; 1}-valued random variable), their variances or related quantities. We
postpone a proof until Section 2.1.2, since an effective way of proving Theo-
rem 2.1 relies on the theory of characteristic functions which we will discuss
in the next section. For the moment, we only give a pictorial illustration in
Figure 2.2.
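The behavior shown in Figure 2.2 is easy to replicate by simulation (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
draws = rng.integers(1, 7, size=1000)               # 1000 casts of a fair dice
running_mean = np.cumsum(draws) / np.arange(1, 1001)
for n in (10, 100, 1000):
    print(n, running_mean[n - 1])                   # approaches 3.5 as n grows
```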
Once we have established that the random variable X̄m = m^{−1} Σ_{i=1}^{m} Xi converges to its mean µ, a natural second question is to establish how quickly it converges and what the properties of the limiting distribution of X̄m − µ are. Note in Figure 2.2 that the initial deviation from the mean is large whereas as we observe more data the empirical mean approaches the true one.
Fig. 2.3. Five instantiations of a running average over outcomes of a toss of a dice. Note that all of them converge to the mean 3.5. Moreover note that they all are well contained within the upper and lower envelopes given by µ ± √(Var_X[x]/m).

The central limit theorem answers this question exactly by addressing a
slightly more general question, namely whether the sum over a number of
independent random variables where each of them arises from a different
distribution might also have a well behaved limiting distribution. This is
the case as long as the variance of each of the random variables is bounded.
The limiting distribution of such a sum is Gaussian. This affirms the pivotal
role of the Gaussian distribution.

Theorem 2.3 (Central Limit Theorem) Denote by Xi independent random variables with means µi and standard deviations σi. Then

Zm := [Σ_{i=1}^{m} σi²]^{−1/2} Σ_{i=1}^{m} (Xi − µi) (2.4)

converges to a Normal Distribution with zero mean and unit variance.

Note that just like the law of large numbers the central limit theorem (CLT)
is an asymptotic result. That is, only in the limit of an infinite number of
observations will it become exact. That said, it often provides an excellent
approximation even for finite numbers of observations, as illustrated in Fig-
ure 2.4. In fact, the central limit theorem and related limit theorems build the
foundation of what is known as asymptotic statistics.

Example 2.1 (Dice) If we are interested in computing the mean of the values returned by a dice we may apply the CLT to the sum over m variables which have all mean µ = 3.5 and variance (see Problem 2.1)

Var_X[x] = E_X[x²] − E_X[x]² = (1 + 4 + 9 + 16 + 25 + 36)/6 − 3.5² ≈ 2.92.

We now study the random variable Wm := m^{−1} Σ_{i=1}^{m} (Xi − 3.5). Since each of the terms in the sum has zero mean, also Wm’s mean vanishes. Moreover, Wm is a multiple of Zm of (2.4). Hence we have that Wm converges to a normal distribution with zero mean and standard deviation √(2.92/m).
Consequently the average of m tosses of the dice yields a random variable with mean 3.5 and it will approach a normal distribution with variance m^{−1} 2.92. In other words, the empirical mean converges to its average at rate O(m^{−1/2}). Figure 2.3 gives an illustration of the quality of the bounds implied by the CLT.
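Again this is easy to check by simulation (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
m, trials = 100, 10000
# The average of m dice tosses, repeated many times.
means = rng.integers(1, 7, size=(trials, m)).mean(axis=1)
print(means.mean())        # close to 3.5
print(m * means.var())     # close to Var[x] = 2.92, as predicted
```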
One remarkable property of functions of random variables is that in many
conditions convergence properties of the random variables are bestowed upon
the functions, too. This is manifest in the following two results: a variant
of Slutsky’s theorem and the so-called delta method. The former deals with
limit behavior whereas the latter deals with an extension of the central limit
theorem.

Theorem 2.4 (Slutsky’s Theorem) Denote by Xi, Yi sequences of random variables with Xi → X and Yi → c for c ∈ R in probability. Moreover, denote by g(x, y) a function which is continuous for all (x, c). In this case the random variable g(Xi, Yi) converges in probability to g(X, c).

For a proof see e.g. [Bil68]. Theorem 2.4 is often referred to as the continuous
mapping theorem (Slutsky only proved the result for affine functions). It
means that for functions of random variables it is possible to pull the limiting
procedure into the function. Such a device is useful when trying to prove
asymptotic normality and in order to obtain characterizations of the limiting
distribution.

Theorem 2.5 (Delta Method) Assume that Xn ∈ Rd is asymptotically normal with a_n^{−2}(Xn − b) → N(0, Σ) for a_n² → 0. Moreover, assume that g : Rd → Rl is a mapping which is continuously differentiable at b. In this case the random variable g(Xn) converges:

a_n^{−2}(g(Xn) − g(b)) → N(0, [∇x g(b)] Σ [∇x g(b)]^T). (2.5)
Proof Via a Taylor expansion we see that

a_n^{−2}[g(Xn) − g(b)] = [∇x g(ξn)]^T a_n^{−2}(Xn − b). (2.6)
Here ξn lies on the line segment [b, Xn]. Since Xn → b we have that ξn → b, too. Since g is continuously differentiable at b we may apply Slutsky’s theorem to see that a_n^{−2}[g(Xn) − g(b)] → [∇x g(b)]^T a_n^{−2}(Xn − b). As a consequence, the transformed random variable is asymptotically normal with covariance [∇x g(b)] Σ [∇x g(b)]^T.
We will use the delta method when it comes to investigating properties of
maximum likelihood estimators in exponential families. There g will play the
role of a mapping between expectations and the natural parametrization of
a distribution.

2.1.2 The Characteristic Function


The Fourier transform plays a crucial role in many areas of mathematical
analysis and engineering. This is equally true in statistics. For historic reasons its application to distributions is called the characteristic function,
will discuss in this section. At its foundations lie standard tools from
functional analysis and signal processing [Rud73, Pap62]. We begin by
recalling the basic properties:

Definition 2.6 (Fourier Transform) Denote by f : Rn → C a function defined on an n-dimensional Euclidean space. Moreover, let x, ω ∈ Rn. Then the Fourier transform F and its inverse F^{−1} are given by

F[f](ω) := (2π)^{−n/2} ∫_{Rn} f(x) exp(−i⟨ω, x⟩) dx (2.7)
F^{−1}[g](x) := (2π)^{−n/2} ∫_{Rn} g(ω) exp(i⟨ω, x⟩) dω. (2.8)

The key insight is that F^{−1} ◦ F = F ◦ F^{−1} = Id. In other words, F and F^{−1} are inverses to each other for all functions which are L2 integrable on Rn, which includes probability distributions. One of the key advantages of Fourier transforms is that derivatives and convolutions of f translate into multiplications. That is, F[f ∗ g] = (2π)^{n/2} F[f] · F[g]. The same rule applies to the inverse transform, i.e. F^{−1}[f ∗ g] = (2π)^{n/2} F^{−1}[f] F^{−1}[g].
The benefit for statistical analysis is that often problems are more easily
expressed in the Fourier domain and it is easier to prove convergence results
there. These results then carry over to the original domain. We will be
exploiting this fact in the proof of the law of large numbers and the central
limit theorem. Note that the definition of Fourier transforms can be extended
to more general domains such as groups. See e.g. [BCR84] for further details.
We next introduce the notion of a characteristic function of a distribution.1

Definition 2.7 (Characteristic Function) Denote by p(x) a distribution of a random variable X ∈ Rd. Then the characteristic function φX(ω) with ω ∈ Rd is given by

φX(ω) := (2π)^{d/2} F^{−1}[p(x)] = ∫ exp(i⟨ω, x⟩) dp(x). (2.9)

In other words, φX (ω) is the inverse Fourier transform applied to the prob-
ability measure p(x). Consequently φX (ω) uniquely characterizes p(x) and
moreover, p(x) can be recovered from φX (ω) via the forward Fourier trans-
form. One of the key utilities of characteristic functions is that they allow
us to deal in easy ways with sums of random variables.

Theorem 2.8 (Sums of random variables and convolutions) Denote by X, Y ∈ R two independent random variables. Moreover, denote by Z := X + Y the sum of both random variables. Then the distribution over Z satisfies p(z) = [p(x) ∗ p(y)](z), i.e. a convolution. Moreover, the characteristic function satisfies

φZ(ω) = φX(ω) φY(ω). (2.10)
Proof Z is given by Z = X + Y. Hence, for a given Z = z we have the freedom to choose X = x freely provided that Y = z − x. In terms of distributions this means that the joint distribution p(z, x) is given by

p(z, x) = p(Y = z − x) p(x)

and hence p(z) = ∫ p(Y = z − x) dp(x) = [p(x) ∗ p(y)](z). The result for characteristic functions follows from the convolution property of the Fourier transform.
For sums of several random variables the characteristic function is the prod-
uct of the individual characteristic functions. This allows us to prove both
the weak law of large numbers and the central limit theorem (see Figure 2.4
for an illustration) by proving convergence in the Fourier domain.
Proof [Weak Law of Large Numbers] At the heart of our analysis lies a Taylor expansion of the exponential:

exp(i⟨ω, x⟩) = 1 + i⟨ω, x⟩ + o(|ω|)
and hence φX(ω) = 1 + iωE_X[x] + o(|ω|).
1
In Chapter ?? we will discuss more general descriptions of distributions of which φX is a special
case. In particular, we will replace the exponential exp(i ⟨ω, x⟩) by a kernel function k(x, x′ ).
Fig. 2.4. A working example of the central limit theorem. The top row contains distributions of sums of uniformly distributed random variables on the interval [−0.5, 0.5]. From left to right we have sums of 1, 2, 4, 8 and 16 random variables. The bottom row contains the same distributions with the means rescaled by √m, where m is the number of observations. Note how the distribution converges increasingly to the normal distribution.

Given m random variables Xi with mean E_X[x] = µ this means that their average X̄m := (1/m) Σ_{i=1}^{m} Xi has the characteristic function

φ_{X̄m}(ω) = (1 + (i/m) ωµ + o(m^{−1}|ω|))^m. (2.11)

In the limit of m → ∞ this converges to exp(iωµ), the characteristic function of the constant distribution with mean µ. This proves the claim that in the large sample limit X̄m is essentially constant with mean µ.

Proof [Central Limit Theorem] We use the same idea as above to prove
the CLT. The main difference, though, is that we need to assume that the
second moments of the random variables Xi exist. To avoid clutter we only
prove the case of constant mean EXi [xi] = µ and variance VarXi [xi] = σ2.
Let Zm := (mσ²)^{−1/2} Σ_{i=1}^{m} (Xi − µ). Our proof relies on showing convergence of the characteristic function of Zm, i.e. φ_{Zm}, to that of a normally distributed random variable W with zero mean and unit variance. Expanding the exponential to second order yields:

exp(iωx) = 1 + iωx − (1/2) ω²x² + o(|ω|²)
and hence φX(ω) = 1 + iωE_X[x] − (1/2) ω² Var_X[x] + o(|ω|²).

Since the mean of Zm vanishes by centering (Xi − µ) and the variance per variable is m^{−1} we may write the characteristic function of Zm via

φ_{Zm}(ω) = (1 − (1/2m) ω² + o(m^{−1}|ω|²))^m.

As before, taking the limit m → ∞ yields the exponential function. We have

lim_{m→∞} φ_{Zm}(ω) = exp(−ω²/2)

which is the characteristic function of the normal distribution with zero mean and variance 1. Since the characteristic function transform is injective this proves our claim.
Note that the characteristic function has a number of useful properties. For instance, it can also be used as a moment generating function via the identity:

∇ω^n φX(0) = i^n E_X[x^n]. (2.12)

Its proof is left as an exercise. See Problem 2.2 for details. This connection
also implies (subject to regularity conditions) that if we know the moments
of a distribution we are able to reconstruct it directly since it allows us
to reconstruct its characteristic function. This idea has been exploited in
density estimation [Cra46] in the form of Edgeworth and Gram-Charlier
expansions [Hal92].

2.1.3 Tail Bounds


In practice we never have access to an infinite number of observations. Hence
the central limit theorem does not apply but is just an approximation to the
real situation. For instance, in the case of the dice, we might want to state
worst case bounds for finite sums of random variables to determine by how
much the empirical mean may deviate from its expectation. Those bounds
will not only be useful for simple averages but to quantify the behavior of
more sophisticated estimators based on a set of observations.
The bounds we discuss below differ in the amount of knowledge they
assume about the random variables in question. For instance, we might only
know their mean. This leads to the Gauss-Markov inequality. If we know their
mean and their variance we are able to state a stronger bound, the
Chebyshev inequality. For an even stronger setting, when we know that
each variable has bounded range, we will be able to state a Chernoff bound.
Those bounds are progressively more tight and also more difficult to prove.
We state them in order of technical sophistication.

Theorem 2.9 (Gauss-Markov) Denote by X ≥ 0 a random variable and let µ be its mean. Then for any ϵ > 0 we have

Pr(X ≥ ϵ) ≤ µ/ϵ. (2.13)

Proof We use the fact that for nonnegative random variables

Pr(X ≥ ϵ) = ∫_ϵ^∞ dp(x) ≤ ∫_ϵ^∞ (x/ϵ) dp(x) ≤ ϵ^{−1} ∫_0^∞ x dp(x) = µ/ϵ.
This means that for random variables with a small mean, the proportion of
samples with large value has to be small.
Consequently deviations from the mean are O(ϵ− 1). However, note that this
bound does not depend on the number of observations. A useful application
of the Gauss-Markov inequality is Chebyshev’s inequality. It is a statement on
the range of random variables using its variance.

Theorem 2.10 (Chebyshev) Denote by X a random variable with mean µ and variance σ². Then the following holds for ϵ > 0:

Pr(|x − µ| ≥ ϵ) ≤ σ²/ϵ². (2.14)
Proof Denote by Y := |X − µ|2 the random variable quantifying the deviation
of X from its mean µ. By construction we know that EY [y] = σ2. Next let γ :=
ϵ2. Applying Theorem 2.9 to Y and γ yields Pr(Y > γ) ≤ σ2/γ which proves the
claim.
Note the improvement to the Gauss-Markov inequality. Where before we had
bounds whose confidence improved with O(ϵ− 1) we can now state O(ϵ− 2)
bounds for deviations from the mean.
Example 2.2 (Chebyshev bound) Assume that X̄m := m^{−1} Σ_{i=1}^{m} Xi is the average over m random variables with mean µ and variance σ². Hence X̄m also has mean µ. Its variance is given by

Var_{X̄m}[x̄m] = m^{−2} Σ_{i=1}^{m} Var_{Xi}[xi] = m^{−1} σ².
Applying Chebyshev’s inequality yields that the probability of a deviation of ϵ from the mean µ is bounded by σ²/(mϵ²). For a fixed failure probability δ = Pr(|X̄m − µ| > ϵ) we have

δ ≤ σ² m^{−1} ϵ^{−2} and equivalently ϵ ≤ σ/√(mδ).

This bound is quite reasonable for large δ but it means that for high levels of confidence we need a huge number of observations.
Much stronger results can be obtained if we are able to bound the range
of the random variables. Using the latter, we reap an exponential improve-
ment in the quality of the bounds in the form of the McDiarmid [McD89]
inequality. We state the latter without proof:

Theorem 2.11 (McDiarmid) Denote by f : X^m → R a function on X and let Xi be independent random variables. In this case the following holds:

Pr(|f(x1, . . . , xm) − E_{X1,...,Xm}[f(x1, . . . , xm)]| > ϵ) ≤ 2 exp(−2ϵ² C^{−2}).

Here the constant C² is given by C² = Σ_{i=1}^{m} ci², where

|f(x1, . . . , xi, . . . , xm) − f(x1, . . . , xi′, . . . , xm)| ≤ ci

for all x1, . . . , xm, xi′ and for all i.


This bound can be used for averages of a number of observations when
they are computed according to some algorithm as long as the latter can be
encoded in f . In particular, we have the following bound [Hoe63]:
Theorem 2.12 (Hoeffding) Denote by Xi iid random variables with bounded range Xi ∈ [a, b] and mean µ. Let X̄m := m^{−1} Σ_{i=1}^{m} Xi be their average. Then the following bound holds:

Pr(|X̄m − µ| > ϵ) ≤ 2 exp(−2mϵ²/(b − a)²). (2.15)

Proof This is a corollary of Theorem 2.11. In X̄m each individual random variable has range [a/m, b/m] and we set f(X1, . . . , Xm) := X̄m. Straightforward algebra shows that C² = m^{−1}(b − a)². Plugging this back into McDiarmid’s theorem proves the claim.
Note that (2.15) is exponentially better than the previous bounds. With
increasing sample size the confidence level also increases exponentially.

Example 2.3 (Hoeffding bound) As in Example 2.2 assume that Xi are iid random variables and let X̄m be their average. Moreover, assume that Xi ∈ [a, b] for all i. As before we want to obtain guarantees on the probability that |X̄m − µ| > ϵ. For a given level of confidence 1 − δ we need to solve

δ ≤ 2 exp(−2mϵ²/(b − a)²) (2.16)

for ϵ. Straightforward algebra shows that in this case ϵ needs to satisfy

ϵ ≥ |b − a| √([log 2 − log δ]/2m). (2.17)

In other words, while the confidence level only enters logarithmically into the inequality, the sample size m improves our confidence only with ϵ = O(m^{−1/2}). That is, in order to improve our confidence interval from ϵ = 0.1 to ϵ = 0.01 we need 100 times as many observations.
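As a numerical illustration (a sketch with arbitrary values for a, b, and δ):

```python
from math import ceil, log, sqrt

a, b, delta = 0.0, 1.0, 0.05

def hoeffding_eps(m):
    """Accuracy epsilon implied by (2.17) after m observations."""
    return abs(b - a) * sqrt((log(2) - log(delta)) / (2 * m))

def hoeffding_m(eps):
    """Observations needed for accuracy eps at confidence 1 - delta."""
    return ceil((b - a) ** 2 * log(2 / delta) / (2 * eps ** 2))

print(hoeffding_eps(100))                    # accuracy after 100 draws
print(hoeffding_m(0.1), hoeffding_m(0.01))   # 100x more data for 10x accuracy
```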

While this bound is tight (see Problem 2.5 for details), it is possible to ob-
tain better bounds if we know additional information. In particular knowing
a bound on the variance of a random variable in addition to knowing that it
has bounded range would allow us to strengthen the statement considerably.
The Bernstein inequality captures this connection. For details see [BBL05] or
works on empirical process theory [vdVW96, SW86, Vap82].

2.1.4 An Example
It is probably easiest to illustrate the various bounds using a concrete example. In a semiconductor fab processors are produced on a wafer. A typical 300mm wafer holds about 400 chips. A large number of processing steps are required to produce a finished microprocessor and often it is impossible to assess the effect of a design decision until the finished product has been produced.
Assume that the production manager wants to change some step from
process ’A’ to some other process ’B’. The goal is to increase the yield of
the process, that is, the number of chips of the 400 potential chips on the
wafer which can be sold. Unfortunately this number is a random variable,
i.e. the number of working chips per wafer can vary widely between different
wafers. Since process ’A’ has been running in the factory for a very long
time we may assume that the yield is well known, say it is µA = 350 out of
400 processors on average. It is our goal to determine whether process ’B’
is better and what its yield may be. Obviously, since production runs are
expensive we want to be able to determine this number as quickly as
possible, i.e. using as few wafers as possible. The production manager is risk
averse and wants to ensure that the new process is really better. Hence he
requires a confidence level of 95% before he will change the production.
A first step is to formalize the problem. Since we know process ’A’ exactly
we only need to concern ourselves with ’B’. We associate the random variable Xi
with wafer i. A reasonable (and somewhat simplifying) assumption is to posit
that all Xi are independent and identically distributed where all Xi have the
mean µB. Obviously we do not know µB — otherwise there would be no
reason for testing! We denote by X̄m the average of the yields of m wafers
using process ’B’. What we are interested in is the accuracy ϵ for which the
probability
δ = Pr(|X̄m − µB | > ϵ) satisfies δ ≤ 0.05.
Let us now discuss how the various bounds behave. For the sake of the
argument assume that µB − µA = 20, i.e. the new process produces on
average 20 additional usable chips.

Chebyshev In order to apply the Chebyshev inequality we need to bound the variance of the random variables Xi. The worst possible variance would occur if Xi ∈ {0; 400} where both events occur with equal probability. In other words, with equal probability the wafer is fully usable or it is entirely broken. This amounts to σ² = 0.5(200 − 0)² + 0.5(200 − 400)² = 40,000.
Since for Chebyshev bounds we have

δ ≤ σ² m^{−1} ϵ^{−2} (2.18)

we can solve for m = σ²/(δϵ²) = 40,000/(0.05 · 400) = 2,000. In other words, we would typically need 2,000 wafers to assess with reasonable confidence whether process ’B’ is better than process ’A’. This is completely unrealistic.
Slightly better bounds can be obtained if we are able to make better assumptions on the variance. For instance, if we can be sure that the yield of process ’B’ is at least 300, then the largest possible variance is 0.25(300 − 0)² + 0.75(300 − 400)² = 30,000, leading to a requirement of 1,500 wafers, which is not much better.

Hoeffding Since the yields are in the interval {0, . . . , 400} we have an explicit bound on the range of observations. Recall the inequality (2.16) which bounds the failure probability δ = 0.05 by an exponential term. Solving this for m yields

m ≥ 0.5 |b − a|² ϵ^{−2} log(2/δ) ≈ 737.8 (2.19)

In other words, we need at least 738 wafers to determine whether process ’B’ is better. While this is a significant improvement, it still seems wasteful and we would like to do better.
Central Limit Theorem The central limit theorem is an approximation. This means that our reasoning is not accurate any more. That said, for large enough sample sizes the approximation is good enough to use it for practical predictions. Assume for the moment that we knew the variance σ² exactly. In this case we know that X̄m is approximately normal with mean µB and variance m^{−1}σ². We are interested in the interval [µ − ϵ, µ + ϵ] which contains 95% of the probability mass of a normal distribution. That is, we need to solve the integral

(2πσ²)^{−1/2} ∫_{µ−ϵ}^{µ+ϵ} exp(−(x − µ)²/(2σ²)) dx = 0.95 (2.20)

This can be solved efficiently using the cumulative distribution function of a normal distribution (see Problem 2.3 for more details). One can check that (2.20) is solved for ϵ = 1.96σ. In other words, an interval of ±1.96σ contains 95% of the probability mass of a normal distribution. The number of observations is therefore determined by

ϵ = 1.96σ/√m and hence m = 3.84 σ²/ϵ². (2.21)
Again, our problem is that we do not know the variance of the distribution. Using the worst-case bound on the variance, i.e. σ² = 40,000, would lead to a requirement of at least m = 384 wafers for testing. However, while we do not know the variance, we may estimate it along with the mean and use the empirical estimate, possibly plus some small constant to ensure we do not underestimate the variance, instead of the upper bound.
Assuming that fluctuations turn out to be in the order of 50 processors, i.e. σ² = 2,500, we are able to reduce our requirement to approximately 24 wafers. This is probably an acceptable number for a practical test.
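The three sample-size estimates above follow directly from the respective bounds; a sketch using the numbers from the text:

```python
from math import log

delta, eps = 0.05, 20.0        # 95% confidence, accuracy of 20 chips
a, b = 0.0, 400.0              # range of the yield per wafer

var_worst = 0.5 * 200**2 + 0.5 * 200**2                    # 40,000
m_chebyshev = var_worst / (delta * eps**2)                 # 2,000 wafers
m_hoeffding = 0.5 * (b - a)**2 * log(2 / delta) / eps**2   # about 738 wafers
m_clt = 1.96**2 * 2500 / eps**2                            # about 24 wafers

print(m_chebyshev, m_hoeffding, m_clt)
```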

Rates and Constants The astute reader will have noticed that all three
confidence bounds had scaling behavior m = O(ϵ^{−2}). That is, in all cases the
number of observations was a fairly ill behaved function of the amount of
confidence required. If we were just interested in convergence per se, a
statement like that of the Chebyshev inequality would have been entirely
sufficient. The various laws and bounds can often be used to obtain con-
siderably better constants for statistical confidence guarantees. For more
complex estimators, such as methods to classify, rank, or annotate data,
a reasoning such as the one above can become highly nontrivial. See e.g.
[MYA94, Vap98] for further details.
2.2 Parzen Windows
2.2.1 Discrete Density Estimation
The convergence theorems discussed so far mean that we can use empirical observations for the purpose of density estimation. Recall the case of the Naive Bayes classifier of Section 1.3.1. One of the key ingredients was the ability to use information about word counts for different document classes to estimate the probability p(wj|y), where wj denoted the number of occurrences of word j in document x, given that it was labeled y. In the following we discuss an extremely simple and crude method for estimating probabilities. It relies on the fact that for random variables Xi drawn from distribution p(x) with discrete values Xi ∈ X we have

lim_{m→∞} p̂X(x) = p(x) (2.22)

where p̂X(x) := m^{−1} Σ_{i=1}^{m} {xi = x} for all x ∈ X. (2.23)

Let us discuss a concrete case. We assume that we have 12 documents and would like to estimate the probability of occurrence of the word ’dog’ from it. As raw data we have:
Document ID 1 2 3 4 5 6 7 8 9 10 11 12
Occurrences of ‘dog’ 1 0 2 0 4 6 3 0 6 2 0 1
This means that the word ‘dog’ occurs the following number of times:
Occurrences of ‘dog’ 0 1 2 3 4 5 6
Number of documents 4 2 2 1 1 0 2
Something unusual is happening here: for some reason we never observed 5 instances of the word ‘dog’ in our documents, only 4 or fewer, or alternatively 6 times. So what about 5 times? It is reasonable to assume that the corresponding value should not be 0 either. Maybe we did not sample enough. One possible strategy is to add pseudo-counts to the observations. This amounts to the following estimate:

p̂X(x) := (m + |X|)^{−1} [1 + Σ_{i=1}^{m} {xi = x}] (2.24)

Clearly the limit for m → ∞ is still p(x). Hence, asymptotically we do not lose anything. This prescription is what we used in Algorithm 1.1; it is a method called Laplace smoothing. Below we contrast the two methods:
Occurrences of ‘dog’ 0 1 2 3 4 5 6
Number of documents 4 2 2 1 1 0 2
Frequency of occurrence 0.33 0.17 0.17 0.083 0.083 0 0.17
Laplace smoothing 0.26 0.16 0.16 0.11 0.11 0.05 0.16
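The two rows of the table are reproduced by the following sketch:

```python
counts = {0: 4, 1: 2, 2: 2, 3: 1, 4: 1, 5: 0, 6: 2}   # documents per count
m = sum(counts.values())        # 12 documents
size_X = len(counts)            # 7 distinct values

for x, c in counts.items():
    mle = c / m                           # relative frequency, see (2.23)
    laplace = (1 + c) / (m + size_X)      # Laplace smoothing, see (2.24)
    print(x, round(mle, 2), round(laplace, 2))
```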
The problem with this method is that as |X| increases we need increasingly
more observations to obtain even a modicum of precision. On average, we
will need at least one observation for every x ∈ X. This can be infeasible for
large domains as the following example shows.

Example 2.4 (Curse of Dimensionality) Assume that X = {0, 1}^d, i.e. x consists of binary bit vectors of dimensionality d. As d increases the size of X increases exponentially, requiring an exponential number of observations to perform density estimation. For instance, if we work with images, a 100 × 100 black and white picture would require in the order of 10^{3010} observations to model such fairly low-resolution images accurately. This is clearly utterly infeasible: the number of particles in the known universe is in the order of 10^{80}. Bellman [Bel61] was one of the first to formalize this dilemma by coining the term ’curse of dimensionality’.
This example clearly shows that we need better tools to deal with high-dimensional data. We will present one such tool in the next section.

2.2.2 Smoothing Kernel


We now proceed to proper density estimation. Assume that we want to
estimate the distribution of weights of a population. Sample data from a
population might look as follows: X = {57, 88, 54, 84, 83, 59, 56, 43, 70, 63,
90, 98, 102, 97, 106, 99, 103, 112}. We could use this to perform a density
estimate by placing discrete components at the locations xi ∈ X with weight
1/|X| as what is done in Figure 2.5. There is no reason to believe that weights
are quantized in kilograms, or grams, or miligrams (or pounds and stones).
And even if it were, we would expect that similar weights would have similar densities associated with them. Indeed, as the right diagram of Figure 2.5 shows, the corresponding density is continuous.
The key question arising is how we may transform X into a realistic estimate of the density p(x). Starting with a ’density estimate’ containing only discrete terms

p̂(x) = (1/m) Σ_{i=1}^{m} δ(x − xi) (2.25)
we may choose to smooth it out by a smoothing kernel h(x) such that the probability mass becomes somewhat more spread out. For a density estimate on X ⊆ Rd this is achieved by

p̂(x) = (1/m) Σ_{i=1}^{m} r^{−d} h((x − xi)/r). (2.26)
This expansion is commonly known as the Parzen windows estimate. Note that h must be chosen such that h(x) ≥ 0 for all x ∈ X and moreover that ∫ h(x) dx = 1 in order to ensure that (2.26) is a proper probability density. We now formally justify this smoothing. Let R be a small region such that

q = ∫_R p(x) dx.

Out of the m samples drawn from p(x), the probability that k of them fall in region R is given by the binomial distribution

(m choose k) q^k (1 − q)^{m−k}.

The expected fraction of points falling inside the region can easily be computed from the expected value of the binomial distribution: E[k/m] = q. Similarly, the variance can be computed as Var[k/m] = q(1 − q)/m. As m → ∞ the variance goes to 0 and hence the estimate peaks around the expectation. We can therefore set

k ≈ mq.

If we assume that R is so small that p(x) is constant over R, then

q ≈ p(x) · V,

where V is the volume of R. Rearranging we obtain


p(x) ≈ k/(mV). (2.27)
Let us now set R to be a cube with side length r, and define the function

h(u) = 1 if |ui| ≤ 1/2 for all i, and h(u) = 0 otherwise.

Observe that h((x − xi)/r) is 1 if and only if xi lies inside a cube of side length r centered
around x. If we let

k = Σ_{i=1}^{m} h((x − xi)/r),

then one can use (2.27) to estimate p via

p̂(x) = (1/m) Σ_{i=1}^{m} r^{−d} h((x − xi)/r),

where r^d is the volume of the hypercube of side length r in d dimensions. By symmetry, we can interpret this equation as the sum over m cubes centered around the m data points xi. If we replace the cube by any smooth kernel function h(·) this recovers (2.26).
There exists a large variety of different kernels which can be used for the kernel density estimate. [Sil86] has a detailed description of the properties of a number of kernels. Popular choices are

h(x) = (2π)^{−1/2} e^{−x²/2}     Gaussian kernel (2.28)
h(x) = (1/2) e^{−|x|}            Laplace kernel (2.29)
h(x) = (3/4) max(0, 1 − x²)      Epanechnikov kernel (2.30)
h(x) = (1/2) χ_{[−1,1]}(x)       Uniform kernel (2.31)
h(x) = max(0, 1 − |x|)           Triangle kernel. (2.32)
Further kernels are the triweight and the quartic kernel which are basically
powers of the Epanechnikov kernel. For practical purposes the Gaussian ker-
nel (2.28) or the Epanechnikov kernel (2.30) are most suitable. In particular,
the latter has the attractive property of compact support. This means that for
any given density estimate at location x we will only need to evaluate terms
h(xi − x) for which the distance ǁxi − xǁ is less than r. Such expan- sions are
computationally much cheaper, in particular when we make use of fast
nearest neighbor search algorithms [GIM99, IM98]. Figure 2.7 has some
examples of kernels.
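A basic one-dimensional Parzen windows estimator following (2.26), using the Gaussian kernel (2.28) and the weight data from above:

```python
import numpy as np

X = np.array([57, 88, 54, 84, 83, 59, 56, 43, 70, 63,
              90, 98, 102, 97, 106, 99, 103, 112], dtype=float)

def gaussian(u):
    """Gaussian smoothing kernel (2.28)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def parzen(x, X, r):
    """p_hat(x) = (1/m) sum_i r^{-1} h((x - x_i)/r), here with d = 1."""
    return np.mean(gaussian((x - X) / r) / r)

for x in np.linspace(40, 120, 5):
    print(x, round(parzen(x, X, r=10.0), 4))
```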

2.2.3 Parameter Estimation


So far we have not discussed the issue of parameter selection. It should be
evident from Figure 2.6, though, that it is quite crucial to choose a good kernel
width. Clearly, a kernel that is overly wide will oversmooth any fine detail that there might be in the density. On the other hand, a very narrow kernel will not be very useful, since it will be able to make statements only about the locations where we actually observed data.
Fig. 2.5. Left: a naive density estimate given a sample of the weight of 18 persons. Right: the underlying weight distribution.
Fig. 2.6. Parzen windows density estimate associated with the 18 observations of the figure above. From left to right: Gaussian kernel density estimate with kernel of width 0.3, 1, 3, and 10 respectively.
Fig. 2.7. Some kernels for Parzen windows density estimation. From left to right: Gaussian kernel, Laplace kernel, Epanechnikov kernel, and uniform density.

Moreover, there is the issue of choosing a suitable kernel function. The fact
that a large variety of them exists might suggest that this is a crucial issue. In
practice, this turns out not to be the case and instead, the choice of a
suitable kernel width is much more vital for good estimates. In other words,
size matters, shape is secondary.
The problem is that we do not know which kernel width is best for the
data. If the problem is one-dimensional, we might hope to be able to eyeball
the size of r. Obviously, in higher dimensions this approach fails. A second
option would be to choose r such that the log-likelihood of the data is maximized. It is given by

log Π_{i=1}^{m} p̂(xi) = −m log m + Σ_{i=1}^{m} log Σ_{j=1}^{m} r^{−d} h((xi − xj)/r) (2.33)

Remark 2.13 (Log-likelihood) We consider the logarithm of the likelihood for reasons of computational stability to prevent numerical underflow. While each term p(xi) might be within a suitable range, say 10^{−2}, the product of 1000 such terms will easily exceed the exponent range of floating point representations on a computer. Summing over the logarithms, on the other hand, is perfectly feasible even for large numbers of observations.

Unfortunately maximizing the log-likelihood over r is equally infeasible: for decreasing r the only surviving terms in (2.33) are the functions h((xi − xi)/r) = h(0), since the arguments of all other kernel functions diverge. In other words, the log-likelihood is maximized when p̂(x) is peaked exactly at the locations where we observed the data. The graph on the left of Figure 2.6 shows what happens in such a situation.
What we just experienced is a case of overfitting where our model is too
flexible. This led to a situation where our model was able to explain the
observed data “unreasonably well”, simply because we were able to adjust
our parameters given the data. We will encounter this situation throughout
the book. There exist a number of ways to address this problem.

Validation Set: We could use a subset of our set of observations as an estimate of the log-likelihood. That is, we could partition the observations into X := {x1, . . . , xn} and X′ := {xn+1, . . . , xm} and use the second part for a likelihood score according to (2.33). The second set is typically called a validation set.
n-fold Cross-validation: Taking this idea further, note that there is no particular reason why any given xi should belong to X or X′ respectively. In fact, we could use all splits of the observations into sets X and X′ to infer the quality of our estimate. While this is computationally infeasible, we could decide to split the observations into n equally sized subsets, say X1, . . . , Xn, and use each of them as a validation set at a time while the remainder is used to generate a density estimate.
Typically n is chosen to be 10, in which case this procedure is
referred to as 10-fold cross-validation. It is a computationally attractive
procedure insofar as it does not require us to change the
basic estimation algorithm. Nonetheless, computation can be costly.
Leave-one-out Estimator: At the extreme end of cross-validation we could
choose n = m. That is, we only remove a single observation at a time
and use the remainder of the data for the estimate. Using the average
over the likelihood scores provides us with an even more fine-grained
estimate. Denote by p_i(x) the density estimate obtained by using
X := {x_1, . . . , x_m} without x_i. For a Parzen windows estimate this
is given by

p_i(x_i) = (m − 1)^{−1} Σ_{j≠i} r^{−d} h((x_i − x_j)/r) = [m p(x_i) − r^{−d} h(0)] / (m − 1).    (2.34)

Note that it is precisely the term r^{−d} h(0) which is removed from
the estimate. It is this term which led to divergent estimates for
r → 0. This means that the leave-one-out log-likelihood estimate
can be computed easily via

L(X) = m log m + Σ_{i=1}^m log [ (p(x_i) − r^{−d} h(0)/m) / (m − 1) ].    (2.35)

We then choose r such that L(X) is maximized. This strategy is very
robust and, whenever it can be implemented in a computationally
efficient manner, very reliable in performing model selection.
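To make this concrete, here is a minimal Python sketch of selecting r by
maximizing (2.35) for a one-dimensional Gaussian kernel; the data and the
grid of candidate widths are invented for illustration and none of this code
is from the original text:

import numpy as np

def parzen(X, x, r):
    # Parzen windows estimate p(x) = (1/m) sum_j r^{-d} h((x - x_j)/r)
    # with a Gaussian kernel h and d = 1.
    u = (x[:, None] - X[None, :]) / r
    h = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (h / r).mean(axis=1)

def loo_loglik(X, r):
    # Leave-one-out log-likelihood L(X) from (2.35); here h(0) = 1/sqrt(2 pi).
    m = len(X)
    h0 = 1.0 / np.sqrt(2 * np.pi)
    p = parzen(X, X, r)
    return m * np.log(m) + np.sum(np.log((p - h0 / (m * r)) / (m - 1)))

X = np.random.default_rng(0).normal(75.0, 10.0, size=18)  # synthetic weights
widths = np.logspace(-1, 1.5, 50)
r_best = max(widths, key=lambda r: loo_loglik(X, r))

For very small r the argument of the logarithm approaches zero and L(X)
collapses, which is exactly the divergence for r → 0 discussed above.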
An alternative, probably more of theoretical interest, is to choose the scale r
a priori based on the amount of data we have at our disposal. Intuitively,
we need a scheme which ensures that r → 0 as the number of observations
increases, m → ∞. However, we need to ensure that this happens slowly
enough that the number of observations within range r keeps on increasing in
order to ensure good statistical performance. For details we refer the reader
to [Sil86]. Chapter ?? discusses issues of model selection for estimators in
general in considerably more detail.

2.2.4 Silverman’s Rule


Assume you are an aspiring demographer who wishes to estimate the popu-
lation density of a country, say Australia. You might have access to a limited
census which, for a random portion of the population determines where they
live. As a consequence you will obtain a relatively high number of samples
Fig. 2.8. Nonuniform density. Left: original density with samples drawn from the
distribution. Middle: density estimate with a uniform kernel. Right: density estimate
using Silverman’s adjustment.

of city dwellers, whereas the number of people living in the countryside is
likely to be very small.
If we attempt to perform density estimation using Parzen windows, we will
encounter an interesting dilemma: in regions of high density (i.e. the cities)
we will want to choose a narrow kernel width to allow us to model the
variations in population density accurately. Conversely, in the outback, a
very wide kernel is preferable, since the population there is very low.
Unfortunately, this information is exactly what a density estimator itself
could tell us. In other words, we have a chicken and egg situation where
having a good density estimate seems to be necessary to come up with a good
density estimate.
Fortunately this situation can be addressed by realizing that we do not
actually need to know the density but rather a rough estimate of the latter.
This can be obtained by using information about the average distance of the
k nearest neighbors of a point. One of Silverman's rules of thumb [Sil86] is to
choose r_i as

r_i = (c/k) Σ_{x ∈ kNN(x_i)} ‖x − x_i‖.    (2.36)

Typically c is chosen to be 0.5 and k is small, e.g. k = 9, to ensure that the
estimate is computationally efficient. The density estimate is then given by

p(x) = (1/m) Σ_{i=1}^m r_i^{−d} h((x − x_i)/r_i).    (2.37)

Figure 2.8 shows an example of such a density estimate. It is clear that a
locality dependent kernel width is better than choosing a uniformly constant
kernel width. However, note that this increases the computational
complexity of performing a density estimate, since first the k nearest
neighbors need to be found before the density estimate can be carried out.
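A short sketch of how (2.36) and (2.37) could be combined in Python, with a
Gaussian kernel and made-up one-dimensional data (an illustration, not code
from the original text):

import numpy as np

def adaptive_parzen(X, x, c=0.5, k=9):
    # Per-point widths r_i from Silverman's rule (2.36): scaled average
    # distance to the k nearest neighbors of each observation.
    D = np.abs(X[:, None] - X[None, :])      # pairwise distances, d = 1
    knn = np.sort(D, axis=1)[:, 1:k + 1]     # skip the zero self-distance
    r = (c / k) * knn.sum(axis=1)
    # Locality dependent density estimate (2.37) with a Gaussian kernel.
    u = (x[:, None] - X[None, :]) / r[None, :]
    h = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (h / r[None, :]).mean(axis=1)

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.1, 500),   # a dense "city"
                    rng.normal(5, 2.0, 20)])   # a sparse "outback"
density = adaptive_parzen(X, np.linspace(-2, 12, 200))

The O(m^2) distance computation is exactly the extra cost mentioned above;
a tree based nearest neighbor search would reduce it.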
2.2.5 Watson-Nadaraya Estimator


Now that we are able to perform density estimation we may use it to perform
classification and regression. This leads us to an effective method for non-
parametric data analysis, the Watson-Nadaraya estimator [Wat64, Nad65].
The basic idea is very simple: assume that we have a binary classification
problem, i.e. we need to distinguish between two classes. Provided that we
are able to compute density estimates p(x) given a set of observations X we
could appeal to Bayes rule to obtain

p(y|x) = p(x|y) p(y) / p(x)
       = [ (1/m_y) Σ_{i: y_i = y} r^{−d} h((x_i − x)/r) · (m_y/m) ] / [ (1/m) Σ_{i=1}^m r^{−d} h((x_i − x)/r) ].    (2.38)
Here we only take the sum over all xi with label yi = y in the numerator.
The advantage of this approach is that it is very cheap to design such an
estimator. After all, we only need to compute sums. The downside, similar
to that of the k-nearest neighbor classifier, is that it may require sums (or
search) over a large number of observations. That is, evaluation of (2.38) is
potentially an O(m) operation. Fast tree based representations can be used
to accelerate this [BKL06, KM00], however their behavior depends signifi-
cantly on the dimensionality of the data. We will encounter computationally
more attractive methods at a later stage.
For binary classification (2.38) can be simplified considerably. Assume that
y ∈ {±1}. For p(y = 1|x) > 0.5 we should estimate y = 1
and in the converse case we would estimate y = −1. Taking the difference
between twice the numerator and the denominator we can see that the
function

f(x) = [ Σ_i y_i h((x_i − x)/r) ] / [ Σ_i h((x_i − x)/r) ] =: Σ_i y_i w_i(x)    (2.39)

can be used to achieve the same goal since f(x) > 0 ⟺ p(y = 1|x) > 0.5.
Note that f(x) is a weighted combination of the labels y_i associated with
weights w_i(x) which depend on the proximity of x to an observation x_i.
In other words, (2.39) is a smoothed-out version of the k-nearest neighbor
classifier of Section 1.3.2. Instead of drawing a hard boundary at the k closest
observations we use a soft weighting scheme with weights w_i(x) depending
on which observations are closest.
Note furthermore that the numerator of (2.39) is very similar to the simple
classifier of Section 1.3.3. In fact, for kernels k(x, x′) such as the Gaussian
RBF kernel, which are also kernels in the sense of a Parzen windows density
estimate, i.e. k(x, x′) = r^{−d} h((x − x′)/r), the two terms are identical. This
means that the Watson-Nadaraya estimator provides us with an alternative
explanation as to why (1.24) leads to a usable classifier.

Fig. 2.9. Watson-Nadaraya estimate. Left: a binary classifier. The optimal solution
would be a straight line since both classes were drawn from a normal distribution
with the same variance. Right: a regression estimator. The data was generated from
a sinusoid with additive noise. The regression tracks the sinusoid reasonably well.
In the same fashion as the Watson-Nadaraya classifier extends the k-
nearest neighbor classifier we may also construct a Watson-Nadaraya
regression estimator by replacing the binary labels y_i by real-valued values
y_i ∈ R to obtain the regression estimate Σ_i y_i w_i(x). Figure 2.9 has an
example of the workings of both a regression estimator and a classifier. They
are easy to use and they work well for moderately dimensional data.
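A minimal sketch of (2.39) in Python, with Gaussian weights and a synthetic
sinusoid mimicking the right panel of Figure 2.9 (not code from the original
text):

import numpy as np

def watson_nadaraya(X, y, x, r):
    # Weights w_i(x) from (2.39): kernel responses normalized to sum to 1.
    u = (x[:, None] - X[None, :]) / r
    h = np.exp(-0.5 * u ** 2)
    w = h / h.sum(axis=1, keepdims=True)
    # f(x) = sum_i y_i w_i(x); its sign is the classifier, its value the regressor.
    return w @ y

rng = np.random.default_rng(2)
X = rng.uniform(0, 2 * np.pi, 100)
y = np.sin(X) + 0.1 * rng.normal(size=100)
f = watson_nadaraya(X, y, np.linspace(0, 2 * np.pi, 50), r=0.3)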

2.3 Exponential Families


Distributions from the exponential family are some of the most versatile tools
for statistical inference. Gaussians, Poisson, Gamma and Wishart dis-
tributions all form part of the exponential family. They play a key role in
dealing with graphical models, classification, regression and conditional ran-
dom fields which we will encounter in later parts of this book. Some of the
reasons for their popularity are that they lead to convex optimization prob-
lems and that they allow us to describe probability distributions by linear
models.

2.3.1 Basics
Densities from the exponential family are defined by

p(x; θ) := p0(x) exp (⟨ φ(x), θ⟩ − g(θ)) . (2.40)


Here p0(x) is a density on X and is often called the base measure, φ(x) is
a map from x to the sufficient statistics φ(x). θ is commonly referred to as
the natural parameter. It lives in the space dual to φ(x). Moreover, g(θ) is a
normalization constant which ensures that p(x) is properly normalized. g is
often referred to as the log-partition function. The name stems from physics
where Z = eg(θ) denotes the number of states of a physical ensemble. g can be
computed as follows:

g(θ) = log exp (⟨ φ(x), θ⟩ ) dx. (2.41)
X

Example 2.5 (Binary Model) Assume that X = {0, 1} and that φ(x) =
x. In this case we have g(θ) = log(e^0 + e^θ) = log(1 + e^θ). It follows that
p(x = 0; θ) = 1/(1 + e^θ) and p(x = 1; θ) = e^θ/(1 + e^θ). In other words, by
choosing different values of θ one can recover different Bernoulli distributions.

One of the convenient properties of exponential families is that the log-


partition function g can be used to generate moments of the distribution
itself simply by taking derivatives.

Theorem 2.14 (Log partition function) The function g(θ) is convex.
Moreover, the distribution p(x; θ) satisfies

∇_θ g(θ) = E_x[φ(x)] and ∇²_θ g(θ) = Var_x[φ(x)].    (2.42)

Proof Note that ∇²_θ g(θ) = Var_x[φ(x)] implies that g is convex, since the
covariance matrix is positive semidefinite. To show (2.42) we expand

∇_θ g(θ) = [∫_X φ(x) exp ⟨φ(x), θ⟩ dx] / [∫_X exp ⟨φ(x), θ⟩ dx] = ∫ φ(x) p(x; θ) dx = E_x[φ(x)].    (2.43)

Next we take the second derivative to obtain

∇²_θ g(θ) = ∫_X φ(x) [φ(x) − ∇_θ g(θ)]^T p(x; θ) dx    (2.44)
          = E_x[φ(x) φ(x)^T] − E_x[φ(x)] E_x[φ(x)]^T    (2.45)

which proves the claim. For the first equality we used (2.43). For the second
line we used the definition of the variance.
One may show that higher derivatives ∇ⁿ_θ g(θ) generate higher order
cumulants of φ(x) under p(x; θ). This is why g is often also referred to as the
cumulant-generating function. Note that in general, computation of g(θ)
is nontrivial since it involves solving a high-dimensional integral. For many
cases, in fact, the computation is NP hard, for instance when X is the domain
of permutations [FJ95]. Throughout the book we will discuss a number of
approximation techniques which can be applied in such a case.
Let us briefly illustrate (2.43) using the binary model of Example 2.5.
We have that ∇_θ g(θ) = e^θ/(1 + e^θ) and ∇²_θ g(θ) = e^θ/(1 + e^θ)². This is
exactly what we would have obtained from direct computation of the mean
p(x = 1; θ) and variance p(x = 1; θ) − p(x = 1; θ)² subject to the distribution
p(x; θ).
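The identity (2.42) is easy to check numerically for this model. The
following Python sketch (the value of θ is arbitrary; not code from the
original text) compares finite differences of g with the Bernoulli mean and
variance:

import numpy as np

theta = 0.7
g = lambda t: np.log(1.0 + np.exp(t))   # log-partition of the binary model

eps = 1e-5
g1 = (g(theta + eps) - g(theta - eps)) / (2 * eps)              # ~ E[phi(x)]
g2 = (g(theta + eps) - 2 * g(theta) + g(theta - eps)) / eps**2  # ~ Var[phi(x)]

p1 = np.exp(theta) / (1.0 + np.exp(theta))   # p(x = 1; theta)
assert abs(g1 - p1) < 1e-8
assert abs(g2 - p1 * (1.0 - p1)) < 1e-4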

2.3.2 Examples
A large number of densities are members of the exponential family. Note,
however, that in statistics it is not common to express them in the dot
product formulation for historic reasons and for reasons of notational com-
pactness. We discuss a number of common densities below and show why
they can be written in terms of an exponential family. A detailed description
of the most commonly occurring types is given in the summary table below.
Gaussian Let x, µ ∈ R^d and let Σ ∈ R^{d×d} where Σ > 0, that is, Σ is a positive
definite matrix. In this case the normal distribution can be expressed
via

p(x) = (2π)^{−d/2} |Σ|^{−1/2} exp(−(1/2)(x − µ)^T Σ^{−1} (x − µ))    (2.46)
     = exp(x^T Σ^{−1} µ + tr(−(1/2) x x^T Σ^{−1}) − c(µ, Σ))

where c(µ, Σ) = (1/2) µ^T Σ^{−1} µ + (d/2) log 2π + (1/2) log |Σ|. By combining the
terms in x into φ(x) := (x, −(1/2) x x^T) we obtain the sufficient statistics
of x. The corresponding linear coefficients (Σ^{−1} µ, Σ^{−1}) constitute the
natural parameter θ. All that remains to be done to express p(x) in
terms of (2.40) is to rewrite g(θ) in terms of c(µ, Σ). The summary
table below contains details.
Multinomial Another popular distribution is one over k discrete events. In
this case X = {1, . . . , k} and we have in completely generic terms
p(x) = π_x where π_x ≥ 0 and Σ_x π_x = 1. Now denote by e_x ∈ R^k the x-th
unit vector of the canonical basis, that is ⟨e_x, e_{x′}⟩ = 1 if x = x′ and 0
otherwise. In this case we may rewrite p(x) via

p(x) = π_x = exp (⟨e_x, log π⟩)    (2.47)

where log π = (log π_1, . . . , log π_k). In other words, we have succeeded
in rewriting the distribution as a member of the exponential family
where φ(x) = e_x and where θ = log π. Note that in this definition θ
is restricted to a k − 1 dimensional manifold (the k-dimensional
probability simplex). If we relax those constraints we need to ensure that
p(x) remains normalized. Details are given in the summary table.
Poisson This distribution is often used to model distributions over discrete
events. For instance, the number of raindrops which fall on a given
surface area in a given amount of time, the number of stars in a given
volume of space, or the number of Prussian soldiers killed by horse-
kicks in the Prussian cavalry all follow this distribution. It is given by

p(x) = e^{−λ} λ^x / x! = (1/x!) exp (x log λ − λ) where x ∈ N_0.    (2.48)

By defining φ(x) = x we obtain an exponential families model. Note
that things are a bit less trivial here since 1/x! is the nonuniform
counting measure on N_0. The case of the uniform measure which
leads to the exponential distribution is discussed in Problem 2.16.
The reason why many discrete processes follow the Poisson distribution
is that it can be seen as the limit over the average of a large
number of Bernoulli draws: denote by z ∈ {0, 1} a random variable
with p(z = 1) = α. Moreover, denote by z_n the sum over n draws from
this random variable. In this case z_n follows the binomial distribution
with p(z_n = k) = (n choose k) α^k (1 − α)^{n−k}. Now assume that we let
n → ∞ such that the expected value of z_n remains constant. That is,
we rescale α = λ/n. In this case we have

p(z_n = k) = [n! / ((n − k)! k!)] · (λ^k / n^k) · (1 − λ/n)^{n−k}    (2.49)
           = (λ^k / k!) · (1 − λ/n)^n · [ n! / (n^k (n − k)!) · (1 − λ/n)^{−k} ]

For n → ∞ the second term converges to e^{−λ}. The third term
converges to 1, since we have a product of only 2k terms, each of which
converges to 1. Using the exponential families notation we may check
that E[x] = λ and that moreover also Var[x] = λ.
Beta This is a distribution on the unit interval X = [0, 1] which is very
versatile when it comes to modelling unimodal and bimodal distributions.
Fig. 2.10. Left: Poisson distributions with λ ∈ {1, 3, 10}. Right: Beta distributions
with a = 2 and b ∈ {1, 2, 3, 5, 7}. Note how with increasing b the distribution
becomes more peaked close to the origin.

It is given by

p(x) = [Γ(a + b) / (Γ(a) Γ(b))] x^{a−1} (1 − x)^{b−1}.    (2.50)

Taking logarithms we see that this, too, is an exponential families
distribution, since p(x) = exp((a − 1) log x + (b − 1) log(1 − x) +
log Γ(a + b) − log Γ(a) − log Γ(b)).

Figure 2.10 has a graphical description of the Poisson distribution and the
Beta distribution. For a more comprehensive list of exponential family
distributions see the table below and [Fel71, FT94, MN83]. In principle any
map φ(x) and domain X with underlying measure µ are suitable, as long as
the log-partition function g(θ) can be computed efficiently.

Theorem 2.15 (Convex feasible domain) The domain of definition Θ


of g(θ) is convex.

Proof By construction g is convex and differentiable everywhere. Hence the
below-sets {θ | g(θ) ≤ c} exist for all values c and are convex. Consequently
the domain of definition is convex.
Having a convex function is very valuable when it comes to parameter infer-
ence since convex minimization problems have unique minimum values and
global minima. We will discuss this notion in more detail when designing
maximum likelihood estimators.
Name           | Domain X | Measure              | φ(x)                    | g(θ)                                                            | Domain Θ
Multinomial    | {1..N}   | counting             | e_x                     | log Σ_{i=1}^N e^{θ_i}                                           | R^N
Exponential    | N_0      | counting             | x                       | −log(1 − e^θ)                                                   | (−∞, 0)
Poisson        | N_0      | 1/x!                 | x                       | e^θ                                                             | R
Laplace        | [0, ∞)   | Lebesgue             | x                       | −log(−θ)                                                        | (−∞, 0)
Gaussian       | R        | Lebesgue             | (x, −x²/2)              | (1/2) log 2π − (1/2) log θ_2 + θ_1²/(2θ_2)                      | R × (0, ∞)
Gaussian       | R^n      | Lebesgue             | (x, −xx^T/2)            | (n/2) log 2π − (1/2) log |θ_2| + (1/2) θ_1^T θ_2^{−1} θ_1       | R^n × C_n
Inverse Normal | [0, ∞)   | x^{−3/2}             | (−x, −1/x)              | (1/2) log π − 2 √(θ_1 θ_2) − (1/2) log θ_2                      | (0, ∞)²
Beta           | [0, 1]   | 1/(x(1 − x))         | (log x, log(1 − x))     | log [Γ(θ_1) Γ(θ_2) / Γ(θ_1 + θ_2)]                              | (0, ∞)²
Gamma          | [0, ∞)   | 1/x                  | (log x, −x)             | log Γ(θ_1) − θ_1 log θ_2                                        | (0, ∞)²
Wishart        | C_n      | |x|^{−(n+1)/2}       | (log |x|, −x/2)         | −θ_1 log |θ_2| + θ_1 n log 2 + Σ_{i=1}^n log Γ(θ_1 + (1 − i)/2) | R × C_n
Dirichlet      | S_n      | (Π_{i=1}^n x_i)^{−1} | (log x_1, ..., log x_n) | Σ_{i=1}^n log Γ(θ_i) − log Γ(Σ_{i=1}^n θ_i)                     | (R⁺)^n
Inverse χ²     | R⁺       | e^{−1/(2x)}          | −log x                  | (θ − 1) log 2 + log Γ(θ − 1)                                    | (1, ∞)
Logarithmic    | N        | 1/x                  | x                       | log(−log(1 − e^θ))                                              | (−∞, 0)
Conjugate      | Θ        | Lebesgue             | (θ, −g(θ))              | generic                                                         |

S_n denotes the probability simplex in n dimensions. C_n is the cone of positive semidefinite matrices in R^{n×n}.
2.4 Estimation
In many statistical problems the challenge is to estimate parameters of in-
terest. For instance, in the context of exponential families, we may want
to estimate a parameter θˆ such that it is close to the “true” parameter θ∗
in the distribution. While the problem is fully general, we will describe the
relevant steps in obtaining estimates for the special case of the exponential
family. This is done for two reasons — firstly, exponential families are an
important special case and we will encounter slightly more complex variants
on the reasoning in later chapters of the book. Secondly, they are of a suffi-
ciently simple form that we are able to show a range of different techniques.
In more advanced applications only a small subset of those methods may be
practically feasible. Hence exponential families provide us with a working
example based on which we can compare the consequences of a number of
different techniques.

2.4.1 Maximum Likelihood Estimation


Whenever we have a distribution p(x; θ) parametrized by some parameter
θ we may use data to find a value of θ which maximizes the likelihood that
the data would have been generated by a distribution with this choice of
parameter.
For instance, assume that we observe a set of temperature measurements
X = {x1, . . . , xm}. In this case, we could try finding a normal distribution
such that the likelihood p(X; θ) of the data under the assumption of a normal
distribution is maximized. Note that this does not imply in any way that the
temperature measurements are actually drawn from a normal distribution.
Instead, it means that we are attempting to find the Gaussian which fits the
data in the best fashion.
While this distinction may appear subtle, it is critical: we do not assume
that our model accurately reflects reality. Instead, we simply try to do the
best possible job at modeling the data given a specified model class. Later we
will encounter alternative approaches to estimation, namely Bayesian
methods, which make the assumption that our model ought to be able to
describe the data accurately.

Definition 2.16 (Maximum Likelihood Estimator) For a model p(·; θ)


parametrized by θ and observations X the maximum likelihood estimator
(MLE) is

θ̂_ML[X] := argmax_θ p(X; θ).    (2.51)
In the context of exponential families this leads to the following procedure:
given m observations drawn iid from some distribution, we can express the
joint likelihood as

p(X; θ) = Π_{i=1}^m p(x_i; θ) = Π_{i=1}^m exp (⟨φ(x_i), θ⟩ − g(θ))    (2.52)
        = exp (m (⟨µ[X], θ⟩ − g(θ)))    (2.53)
where µ[X] := (1/m) Σ_{i=1}^m φ(x_i).    (2.54)

Here µ[X] is the empirical average of the map φ(x). Maximization of p(X; θ)
is equivalent to minimizing the negative log-likelihood − log p(X; θ). The
latter is a common practical choice since for independently drawn data, the
product of probabilities decomposes into the sum of the logarithms of
individual likelihoods. This leads to the following objective function to be
minimized

− log p(X; θ) = m [g(θ) − ⟨θ, µ[X]⟩].    (2.55)

Since g(θ) is convex and ⟨θ, µ[X]⟩ is linear in θ, it follows that minimization
of (2.55) is a convex optimization problem. Using Theorem 2.14 and the first
order optimality condition ∇_θ g(θ) = µ[X] for (2.55) implies that

θ = [∇_θ g]^{−1} (µ[X]) or equivalently E_{x∼p(x;θ)}[φ(x)] = ∇_θ g(θ) = µ[X].    (2.56)

Put another way, the above conditions state that we aim to find the distribu-
tion p(x; θ) which has the same expected value of φ(x) as what we observed
empirically via µ[X]. Under very mild technical conditions a solution to (2.56)
exists.
In general, (2.56) cannot be solved analytically. In certain special cases,
though, this is easily possible. We discuss two such choices in the following:
Multinomial and Poisson distributions.

Example 2.6 (Poisson Distribution) For the Poisson distribution¹, where
p(x; θ) = (1/x!) exp(θx − e^θ), it follows that g(θ) = e^θ and φ(x) = x. This allows

¹ Often the Poisson distribution is specified using the rate parameter λ := e^θ. In this case we have
p(x; λ) = λ^x e^{−λ}/x! as its parametrization. The advantage of the natural parametrization using
θ is that we can directly take advantage of the properties of the log-partition function as
generating the cumulants of x.
us to solve (2.56) in closed form using

∇_θ g(θ) = e^θ = (1/m) Σ_{i=1}^m x_i and hence θ = log Σ_{i=1}^m x_i − log m.    (2.57)

Example 2.7 (Multinomial Distribution) For the multinomial
distribution the log-partition function is given by g(θ) = log Σ_{i=1}^N e^{θ_i},
hence we have that

∇_{θ_i} g(θ) = e^{θ_i} / Σ_{j=1}^N e^{θ_j} = (1/m) Σ_{j=1}^m {x_j = i}.    (2.58)

It is easy to check that (2.58) is satisfied for e^{θ_i} = Σ_{j=1}^m {x_j = i}. In
other words, the MLE for a discrete distribution is simply given by the
empirical frequencies of occurrence.
The multinomial setting also exhibits two rather important aspects of
exponential families: firstly, choosing θ_i = c + log Σ_{j=1}^m {x_j = i} for any
c ∈ R will lead to an equivalent distribution. This is the case since the
sufficient statistic φ(x) is not minimal. In our context this means that the
coordinates of φ(x) are linearly dependent — for any x we have that
Σ_j [φ(x)]_j = 1, hence we could eliminate one dimension. This is precisely
the additional degree of freedom which is reflected in the scaling freedom in θ.
Secondly, for data where some events do not occur at all, the expression
log [Σ_{j=1}^m {x_j = i}] = log 0 is ill defined. This is due to the fact that this
particular set of counts occurs on the boundary of the convex set within which
the natural parameters θ are well defined. We will see how different types of
priors can alleviate the issue.
Using the MLE is not without problems. As we saw in Figure 2.1,
convergence can be slow, since we are not using any side information. The
latter can provide us with problems which are both numerically better
conditioned and which show better convergence, provided that our
assumptions are accurate. Before discussing a Bayesian approach to
estimation, let us discuss basic statistical properties of the estimator.

2.4.2 Bias, Variance and Consistency


When designing any estimator θˆ(X) we would like to obtain a number of
desirable properties: in general it should not be biased towards a particular
solution unless we have good reason to believe that this solution should be
preferred. Instead, we would like the estimator to recover, at least on
2.4 Estimation 69

average, the “correct” parameter, should it exist. This can be formalized in


the notion of an unbiased estimator.
Secondly, we would like that, even if no correct parameter can be found,
e.g. when we are trying to fit a Gaussian distribution to data which is not
normally distributed, that we will converge to the best possible parameter
choice as we obtain more data. This is what is understood by consistency.
Finally, we would like the estimator to achieve low bias and near-optimal
estimates as quickly as possible. The latter is measured by the efficiency
of an estimator. In this context we will encounter the Cramér-Rao bound
which controls the best possible rate at which an estimator can achieve this
goal. Figure 2.11 gives a pictorial description.

Fig. 2.11. Left: unbiased estimator; the estimates, denoted by circles have as mean
the true parameter, as denoted by a star. Middle: consistent estimator. While the
true model is not within the class we consider (as denoted by the ellipsoid), the
estimates converge to the white star which is the best model within the class that
approximates the true model, denoted by the solid star. Right: different estimators
have different regions of uncertainty, as made explicit by the ellipses around the true
parameter (solid star).

Definition 2.17 (Unbiased Estimator) An estimator θˆ[X] is unbiased


if for all θ where X ∼ p(X; θ) we have EX [θ̂[X]] = θ.

In other words, in expectation the parameter estimate matches the true pa-
rameter. Note that this only makes sense if a true parameter actually exists.
For instance, if the data is Poisson distributed and we attempt modeling it
by a Gaussian we will obviously not obtain unbiased estimates.
For finite sample sizes MLE is often biased. For instance, for the normal
distribution the variance estimates carry a bias of O(m^{−1}). See Problem 2.19
for details. In general, under fairly mild conditions, MLE is asymptotically
unbiased [DGL96]. We prove this for exponential families. For more general
settings the proof depends on the dimensionality and smoothness of the
family of densities that we have at our disposition.
Theorem 2.18 (MLE for Exponential Families) Assume that X is an
m-sample drawn iid from p(x; θ). The estimate θ̂[X] = [∇_θ g]^{−1}(µ[X]) is
asymptotically normal with

m^{1/2} [θ̂[X] − θ] → N(0, [∇²_θ g(θ)]^{−1}).    (2.59)

In other words, the estimate θ̂[X] is asymptotically normal, it converges to
the true parameter θ, and moreover, the variance at the correct parameter
is given by the inverse of the covariance matrix of the data, as given by the
second derivative of the log-partition function ∇²_θ g(θ).
Proof Denote by µ = ∇_θ g(θ) the true mean. Moreover, note that ∇²_θ g(θ)
is the covariance of the data drawn from p(x; θ). By the central limit theorem
(Theorem 2.3) we have that m^{1/2} [µ[X] − µ] → N(0, ∇²_θ g(θ)).
Now note that θ̂[X] = [∇_θ g]^{−1} (µ[X]). Therefore, by the delta method
(Theorem 2.5) we know that θ̂[X] is also asymptotically normal. Moreover,
by the inverse function theorem the Jacobian of [∇_θ g]^{−1} satisfies
∇_µ [∇_θ g]^{−1} (µ) = [∇²_θ g(θ)]^{−1}. Applying Slutsky's theorem
(Theorem 2.4) proves the claim.
Now that we established the asymptotic properties of the MLE for
exponential families it is only natural to ask how much variation one may
expect in θ̂[X] when performing estimation. The Cramér-Rao bound governs this.

Theorem 2.19 (Cramér and Rao [Rao73]) Assume that X is drawn from
p(X; θ) and let θ̂[X] be an asymptotically unbiased estimator. Denote by I
the Fisher information matrix and by B the variance of θ̂[X] where

I := Cov [∇_θ log p(X; θ)] and B := Var [θ̂[X]].    (2.60)

In this case det IB ≥ 1 for all estimators θ̂[X].

Proof We prove the claim for the scalar case. The extension to matrices is
straightforward. Using the Cauchy-Schwarz inequality we have

Cov² [∇_θ log p(X; θ), θ̂[X]] ≤ Var [∇_θ log p(X; θ)] Var [θ̂[X]] = IB.    (2.61)

Note that at the true parameter the expected log-likelihood score vanishes

E_X [∇_θ log p(X; θ)] = ∫ ∇_θ p(X; θ) dX = ∇_θ 1 = 0.    (2.62)
Hence we may simplify the covariance formula by dropping the means via

Cov [∇_θ log p(X; θ), θ̂[X]] = E_X [∇_θ log p(X; θ) θ̂[X]]
                             = ∫ p(X; θ) θ̂(X) ∇_θ log p(X; θ) dX
                             = ∫ ∇_θ p(X; θ) θ̂(X) dX = ∇_θ θ = 1.

Here the last equality follows since we may interchange integration over X
and the derivative with respect to θ.
The Cramér-Rao theorem implies that there is a limit to how well we may
estimate a parameter given finite amounts of data. It is also a yardstick by
which we may measure how efficiently an estimator uses data. Formally, we
define the efficiency as the quotient between actual performance and the
Cramér-Rao bound via
e := 1/ det IB.    (2.63)

The closer e is to 1, the lower the variance of the corresponding estimator
θ̂(X). Theorem 2.18 implies that for exponential families MLE is
asymptotically efficient. This turns out to be true more generally:

Theorem 2.20 (Efficiency of MLE [Cra46, GW92, Ber85]) The max-


imum likelihood estimator is asymptotically efficient (e = 1).
So far we only discussed the behavior of θ̂[X] whenever there exists a true θ
generating p(x; θ). If this is not true, we need to settle for less: how well θ̂[X]
approaches the best possible choice within the given model class. Such
behavior is referred to as consistency. Note that it is not possible to define
consistency per se. For instance, we may ask whether θ̂ converges to the
optimal parameter θ∗, or whether p(x; θ̂) converges to the optimal density
p(x; θ∗), and with respect to which norm. Under fairly general conditions this
turns out to be true for finite-dimensional parameters and smoothly
parametrized densities. See [DGL96, vdG00] for proofs and further details.

2.4.3 A Bayesian Approach


The analysis of the Maximum Likelihood method might suggest that in-
ference is a solved problem. After all, in the limit, MLE is unbiased and it
exhibits as small variance as possible. Empirical results using a finite amount
of data, as present in Figure 2.1 indicate otherwise.
While not making any assumptions can lead to interesting and general
theorems, it ignores the fact that in practice we almost always have some
idea about what to expect of our solution. It would be foolish to ignore such
additional information. For instance, when trying to determine the voltage of
a battery, it is reasonable to expect a measurement in the order of 1.5V or
less. Consequently such prior knowledge should be incorporated into the
estimation process. In fact, the use of side information to guide estimation
turns out to be the tool to building estimators which work well in high
dimensions.
Recall Bayes' rule (1.15) which states that p(θ|x) = p(x|θ) p(θ) / p(x). In our
context this means that if we are interested in the posterior probability of θ
assuming a particular value, we may obtain this using the likelihood (often
referred to as evidence) of x having been generated by θ via p(x|θ) and our
prior belief p(θ) that θ might be chosen in the distribution generating x.
Observe the subtle but important difference to MLE: instead of treating θ
as a parameter of a density model, we treat θ as an unobserved random
variable which we may attempt to infer given the observations X.
This can be done for a number of different purposes: we might want to
infer the most likely value of the parameter given the posterior distribution
p(θ|X). This is achieved by

θ̂_MAP(X) := argmax_θ p(θ|X) = argmin_θ − log p(X|θ) − log p(θ).    (2.64)

The second equality follows since p(X) does not depend on θ. This estimator
is also referred to as the Maximum a Posteriori, or MAP estimator. It differs
from the maximum likelihood estimator by adding the negative log-prior
to the optimization problem. For this reason it is sometimes also referred
to as Penalized MLE. Effectively we are penalizing unlikely choices θ via
− log p(θ).
Note that using θ̂MAP (X) as the parameter of choice is not quite accurate.
After all, we can only infer a distribution over θ and in general there is no
guarantee that the posterior is indeed concentrated around its mode. A more
accurate treatment is to use the distribution p(θ|X) directly via

p(x|X) = ∫ p(x|θ) p(θ|X) dθ.    (2.65)

In other words, we integrate out the unknown parameter θ and obtain the
density estimate directly. As we will see, it is generally impossible to solve
(2.65) exactly, an important exception being conjugate priors. In the other
cases one may resort to sampling from the posterior distribution to approx-
imate the integral.
While it is possible to design a wide variety of prior distributions, this book
focuses on two important families: norm-constrained priors and conjugate
priors. We will encounter them throughout, the former sometimes in the
guise of regularization and Gaussian Processes, the latter in the context of
exchangeable models such as the Dirichlet Process.
Norm-constrained priors take on the form

p(θ) ∝ exp(−λ ‖θ − θ_0‖_p^d) for p, d ≥ 1 and λ > 0.    (2.66)

That is, they restrict the deviation of the parameter value θ from some guess
θ_0. The intuition is that extreme values of θ are much less likely than more
moderate choices of θ which will lead to more smooth and even distributions
p(x|θ).
A popular choice is the Gaussian prior which we obtain for p = d = 2
and λ = 1/2σ². Typically one sets θ_0 = 0 in this case. Note that in (2.66)
we did not spell out the normalization of p(θ) — in the context of MAP
estimation this is not needed since it simply becomes a constant offset in
the optimization problem (2.64). We have

θ̂_MAP[X] = argmin_θ m [g(θ) − ⟨θ, µ[X]⟩] + λ ‖θ − θ_0‖_p^d    (2.67)

For d, p ≥ 1 and λ ≥ 0 the resulting optimization problem is convex and it


has a unique solution. Moreover, very efficient algorithms exist to solve this
problem. We will discuss this in detail in Chapter 3. Figure 2.12 shows the
regions of equal prior probability for a range of different norm-constrained
priors.
As can be seen from the diagram, the choice of the norm can have profound
consequences on the solution. That said, as we will show in Chapter ??, the
estimate θ̂MAP is well concentrated and converges to the optimal solution
under fairly general conditions.
An alternative to norm-constrained priors are conjugate priors. They are
designed such that the posterior p(θ|X) has the same functional form as the
prior p(θ). In exponential families such priors are defined via

p(θ|n, ν) = exp (⟨nν, θ⟩ − n g(θ) − h(ν, n)) where    (2.68)

h(ν, n) = log ∫ exp (⟨nν, θ⟩ − n g(θ)) dθ.    (2.69)

Note that p(θ|n, ν) itself is a member of the exponential family with the feature
map φ(θ) = (θ, −g(θ)). Hence h(ν, n) is convex in (nν, n). Moreover, the
posterior distribution has the form

p(θ|X) ∝ p(X|θ)p(θ|n, ν) ∝ exp (⟨ mµ[X] + nν, θ⟩ − (m + n)g(θ)) . (2.70)


74 2 Density Estimation

Fig. 2.12. From left to right: regions of equal prior probability in R2 for priors using
the l1, l2 and l ∞ norm. Note that only the l2 norm is invariant with regard to the
coordinate system. As we shall see later, the l1 norm prior leads to solutions where
only a small number of coordinates is nonzero.

That is, the posterior distribution has the same form as a conjugate prior with
parameters (mµ[X] + nν)/(m + n) and m + n. In other words, n acts like a
phantom sample size and ν is the corresponding mean parameter. Such an
interpretation is reasonable given our desire to design a prior which, when
combined with the likelihood, remains in the same model class: we treat prior
knowledge as having observed virtual data beforehand which is then added
to the actual set of observations. In this sense data and prior become
completely equivalent — we obtain our knowledge either from actual
observations or from virtual observations which describe our belief into how
the data generation process is supposed to behave.
Eq. (2.70) has the added benefit of allowing us to provide an exactly
normalized version of the posterior. Using (2.68) we obtain that

p(θ|X) = exp (⟨mµ[X] + nν, θ⟩ − (m + n) g(θ) − h((mµ[X] + nν)/(m + n), m + n)).

The main remaining challenge is to compute the normalization h for a range
of important conjugate distributions. The table on the following page
provides details. Besides attractive algebraic properties, conjugate priors
also have a second advantage — the integral (2.65) can be solved exactly:

p(x|X) = ∫ exp (⟨φ(x), θ⟩ − g(θ)) ×
         exp (⟨mµ[X] + nν, θ⟩ − (m + n) g(θ) − h((mµ[X] + nν)/(m + n), m + n)) dθ

Combining terms one may check that the integrand amounts to the
normalization in the conjugate distribution, albeit with φ(x) added. This yields

p(x|X) = exp (h((mµ[X] + nν + φ(x))/(m + n + 1), m + n + 1) − h((mµ[X] + nν)/(m + n), m + n))

Such an expansion is very useful whenever we would like to draw x from
p(x|X) without the need to obtain an instantiation of the latent variable θ.
We provide explicit expansions in appendix 2. [GS04] use the fact that θ can
be integrated out to obtain what is called a collapsed Gibbs sampler for topic
models [BNJ03].

2.4.4 An Example
Assume we would like to build a language model based on available doc-
uments. For instance, a linguist might be interested in estimating the fre-
quency of words in Shakespeare’s collected works, or one might want to
compare the change with respect to a collection of webpages. While mod- els
describing documents by treating them as bags of words which all have been
obtained independently of each other are exceedingly simple, they are
valuable for quick-and-dirty content filtering and categorization, e.g. a spam
filter on a mail server or a content filter for webpages.
Hence we model a document d as a multinomial distribution: denote by
w_i for i ∈ {1, . . . , m_d} the words in d. Moreover, denote by p(w|θ) the
probability of occurrence of word w. Then, under the assumption that the
words are independently drawn, we have

p(d|θ) = Π_{i=1}^{m_d} p(w_i|θ).    (2.71)

It is our goal to find parameters θ such that p(d|θ) is accurate. For a given
collection D of documents denote by m_w the number of counts for word w in
the entire collection. Moreover, denote by m the total number of words in
the entire collection. In this case we have

p(D|θ) = Π_i p(d_i|θ) = Π_w p(w|θ)^{m_w}.    (2.72)

Finding suitable parameters θ given D proceeds as follows: In a maximum


likelihood model we set
p(w|θ) = m_w / m.    (2.73)

In other words, we use the empirical frequency of occurrence as our best
guess and the sufficient statistic of D is φ(w) = e_w, where e_w denotes the unit
vector which is nonzero only for the "coordinate" w. Hence µ[D]_w = m_w/m.
We know that the conjugate prior of the multinomial model is a Dirichlet
model. It follows from (2.70) that the posterior mode is obtained by replacing
µ[D] by (mµ[D] + nν)/(m + n). Denote by n_w := ν_w · n the pseudo-counts
arising from the conjugate prior with parameters (ν, n). In this case we will
estimate the probability of the word w as

p(w|θ) = (m_w + n_w)/(m + n) = (m_w + nν_w)/(m + n).    (2.74)

In other words, we add the pseudo-counts n_w to the actual word counts m_w.
This is particularly useful when the document we are dealing with is brief,
that is, whenever we have little data: it is quite unreasonable to infer from
a webpage of approximately 1000 words that words not occurring in this page
have zero probability. This is exactly what is mitigated by means of the
conjugate prior (ν, n).
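A minimal Python sketch of the smoothing step (2.74); the toy corpus,
vocabulary, and prior parameters are invented for illustration:

from collections import Counter

def word_probs(docs, nu, n):
    # p(w) = (m_w + n * nu_w) / (m + n): empirical counts plus pseudo-counts
    # from a Dirichlet prior with mean nu and phantom sample size n.
    counts = Counter(w for d in docs for w in d.split())
    m = sum(counts.values())
    return {w: (counts[w] + n * nu[w]) / (m + n) for w in nu}

vocab = ["the", "cat", "sat", "mat", "dog"]
nu = {w: 1.0 / len(vocab) for w in vocab}   # uniform prior mean
docs = ["the cat sat", "the cat sat the mat"]
p = word_probs(docs, nu, n=5.0)             # "dog" receives nonzero mass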
Finally, let us consider norm-constrained priors of the form (2.66). In this
case, the integral required for

p(D) = ∫ p(D|θ) p(θ) dθ
     ∝ ∫ exp (−λ ‖θ − θ_0‖_p^d + m ⟨µ[D], θ⟩ − m g(θ)) dθ

is intractable and we need to resort to an approximation. A popular choice is
to replace the integral by p(D|θ∗) where θ∗ maximizes the integrand. This
is precisely the MAP approximation of (2.64). Hence, in order to perform
estimation we need to solve

minimize_θ g(θ) − ⟨µ[D], θ⟩ + (λ/m) ‖θ − θ_0‖_p^d.    (2.75)
A very simple strategy for minimizing (2.75) is gradient descent. That is, for
a given value of θ we compute the gradient of the objective function and take
a fixed step towards its minimum. For simplicity assume that d = p = 2 and
λ = 1/2σ², that is, we assume that θ is normally distributed with variance
σ² and mean θ_0. The gradient is given by

∇_θ [− log p(D, θ)] = E_{x∼p(x|θ)} [φ(x)] − µ[D] + (1/(mσ²)) [θ − θ_0].    (2.76)

In other words, it depends on the discrepancy between the mean of φ(x) with
respect to our current model and the empirical average µ[D], and the
difference between θ and the prior mean θ_0.
Unfortunately, convergence of the procedure θ ← θ − η ∇_θ [. . .] is usually very
slow, even if we adjust the steplength η efficiently. The reason is that the
gradient need not point towards the minimum as the space is most likely
distorted. A better strategy is to use Newton's method (see Chapter 3 for
a detailed discussion and a convergence proof). It relies on a second order
Taylor approximation

− log p(D, θ + δ) ≈ − log p(D, θ) + ⟨δ, G⟩ + (1/2) δ^T H δ    (2.77)

where G and H are the first and second derivatives of − log p(D, θ) with
respect to θ. The quadratic expression can be minimized with respect to δ
by choosing δ = −H^{−1}G and we can fashion an update algorithm from this
by letting θ ← θ − H^{−1}G. One may show (see Chapter 3) that Algorithm 2.1
is quadratically convergent. Note that the prior on θ ensures that H is well
conditioned even in the case where the variance of φ(x) is not. In practice
this means that the prior ensures fast convergence of the optimization
algorithm.

Algorithm 2.1 Newton method for MAP estimation

NewtonMAP(D)
  Initialize θ = θ_0
  while not converged do
    Compute G = E_{x∼p(x|θ)} [φ(x)] − µ[D] + (1/(mσ²)) [θ − θ_0]
    Compute H = Var_{x∼p(x|θ)} [φ(x)] + (1/(mσ²)) 1
    Update θ ← θ − H^{−1} G
  end while
  return θ
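A Python sketch of Algorithm 2.1 for the binary model of Example 2.5, where
φ(x) = x, E[φ(x)] = e^θ/(1 + e^θ) and Var[φ(x)] = e^θ/(1 + e^θ)²; the data
and the prior settings are invented for illustration:

import numpy as np

def newton_map(X, theta0=0.0, sigma2=1.0, tol=1e-10):
    # Algorithm 2.1 specialized to a scalar natural parameter.
    m, mu = len(X), np.mean(X)
    theta = theta0
    while True:
        p = 1.0 / (1.0 + np.exp(-theta))                 # E[phi(x)] under p(x|theta)
        G = p - mu + (theta - theta0) / (m * sigma2)     # gradient (2.76)
        H = p * (1.0 - p) + 1.0 / (m * sigma2)           # Hessian, well conditioned
        step = G / H
        theta -= step
        if abs(step) < tol:
            return theta

X = np.array([1, 1, 0, 1, 0, 1, 1, 1])   # eight coin flips
theta_map = newton_map(X)                # pulled slightly towards theta0 = 0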

2.5 Sampling
So far we considered the problem of estimating the underlying probability
density, given a set of samples drawn from that density. Now let us turn to
the converse problem, that is, how to generate random variables given the
underlying probability density. In other words, we want to design a random
variable generator. This is useful for a number of reasons:
We may encounter probability distributions where optimization over suit-
able model parameters is essentially impossible and where it is equally im-
possible to obtain a closed form expression of the distribution. In these cases
it may still be possible to perform sampling to draw examples of the kind
of data we expect to see from the model. Chapter ?? discusses a number of
graphical models where this problem arises.
Secondly, assume that we are interested in testing the performance of a
network router under different load conditions. Instead of introducing the under-
development router in a live network and wreaking havoc, one could
78 2 Density Estimation

estimate the probability density of the network traffic under various load
conditions and build a model. The behavior of the network can then be
simulated by using a probabilistic model. This involves drawing random
variables from an estimated probability distribution.
Carrying on, suppose that we generate data packets by sampling and see
an anomalous behavior in our router. In order to reproduce and debug
this problem one needs access to the same set of random packets which
caused the problem in the first place. In other words, it is often convenient
if our random variable generator is reproducible. At first blush this seems
like a contradiction. After all, our random number generator is supposed to
generate random variables. This is less of a contradiction if we consider how
random numbers are generated in a computer — given a particular
initialization (which typically depends on the state of the system, e.g. time,
disk size, bios checksum, etc.) the random number algorithm produces a
sequence of numbers which, for all practical purposes, can be treated as iid.
A simple method is the linear congruential generator [PTVF94]

x_{i+1} = (a x_i + b) mod c.

The performance of these iterations depends significantly on the choice of the
constants a, b, c. For instance, the GNU C compiler uses a = 1103515245,
b = 12345 and c = 2^32. In general b and c need to be relatively prime and
a − 1 needs to be divisible by all prime factors of c and by 4. It is very much
advisable not to attempt implementing such generators on one's own unless
it is absolutely necessary.
Useful desiderata for a pseudo random number generator (PRNG) are that
for practical purposes it is statistically indistinguishable from a sequence of
iid data. That is, when applying a number of statistical tests, we will accept
the null-hypothesis that the random variables are iid. See Chapter ?? for
a detailed discussion of statistical testing procedures for random variables.
In the following we assume that we have access to a uniform RNG U [0, 1]
which draws random numbers uniformly from the range [0, 1].

2.5.1 Inverse Transformation


We now consider the scenario where we would like to draw from some dis-
tinctively non-uniform distribution. Whenever the latter is relatively simple
this can be achieved by applying an inverse transform:

Theorem 2.21 For z ∼ p(z) with z ∈ Z and an injective transformation
φ : Z → X with inverse transform φ^{−1} on φ(Z) it follows that the random
variable x := φ(z) is drawn from |∇_x φ^{−1}(x)| · p(φ^{−1}(x)). Here
|∇_x φ^{−1}(x)| denotes the determinant of the Jacobian of φ^{−1}.

Fig. 2.13. Left: discrete probability distribution over 5 possible outcomes. Right:
associated cumulative distribution function. When sampling, we draw z uniformly
at random from U[0, 1] and compute the inverse of F.

This follows immediately by applying a variable transformation for a
measure, i.e. we change dp(z) to dp(φ^{−1}(x)) |∇_x φ^{−1}(x)|. Such a
conversion strategy is particularly useful for univariate distributions.

Corollary 2.22 Denote by p(x) a distribution on R with cumulative
distribution function F(x′) = ∫_{−∞}^{x′} dp(x). Then the transformation
x = φ(z) = F^{−1}(z) converts samples z ∼ U[0, 1] to samples drawn from p(x).
We now apply this strategy to a number of univariate distributions. One of
the most common cases is sampling from a discrete distribution.

Example 2.8 (Discrete Distribution) In the case of a discrete distribution
over {1, . . . , k} the cumulative distribution function is a step-function with
steps at {1, . . . , k} where the height of each step is given by the
corresponding probability of the event.
The implementation works as follows: denote by p ∈ [0, 1]^k the vector of
probabilities and denote by f ∈ [0, 1]^k with f_i = f_{i−1} + p_i and f_1 = p_1
the steps of the cumulative distribution function. Then for a random variable
z drawn from U[0, 1] we obtain x = φ(z) := argmin_i {f_i ≥ z}. See Figure
2.13 for an example of a distribution over 5 events.
Fig. 2.14. Left: Exponential distribution with λ = 1. Right: associated cumulative
distribution function. When sampling, we draw z uniformly at random from U[0, 1]
and compute the inverse.

Example 2.9 (Exponential Distribution) The density of an exponentially
distributed random variable is given by

p(x|λ) = λ exp(−λx) if λ > 0 and x ≥ 0.    (2.78)

This allows us to compute its cdf as

F(x|λ) = 1 − exp(−λx) if λ > 0 for x ≥ 0.    (2.79)

Therefore to generate an exponential random variable we draw z ∼ U[0, 1]
and solve x = φ(z) = F^{−1}(z|λ) = −λ^{−1} log(1 − z). Since z and 1 − z are
drawn from U[0, 1] we can simplify this to x = −λ^{−1} log z.

We could apply the same reasoning to the normal distribution in order to
draw Gaussian random variables. Unfortunately, the cumulative distribution
function of the Gaussian is not available in closed form and we would need to
resort to rather nontrivial numerical techniques. It turns out that there exists
a much more elegant algorithm which has its roots in Gauss' proof of the
normalization constant of the Normal distribution. This technique is known
as the Box-Müller transform.

Example 2.10 (Box-Müller Transform) Denote by X, Y independent
Gaussian random variables with zero mean and unit variance. We have

p(x, y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/2π) e^{−(x²+y²)/2}.    (2.80)

Fig. 2.15. The true density of the standard normal distribution (red line) is
contrasted with the histogram of 20,000 random variables generated by the
Box-Müller transform.

The key observation is that the joint distribution p(x, y) is radially
symmetric, i.e. it only depends on the radius r² = x² + y². Hence we may
perform a variable substitution in polar coordinates via the map φ where

x = r cos θ and y = r sin θ, hence (x, y) = φ^{−1}(r, θ).    (2.81)

This allows us to express the density in terms of (r, θ) via

p(r, θ) = p(φ^{−1}(r, θ)) |∇_{r,θ} φ^{−1}(r, θ)|
        = (1/2π) e^{−r²/2} |det( cos θ  sin θ ; −r sin θ  r cos θ )|
        = (r/2π) e^{−r²/2}.
The fact that p(r, θ) is constant in θ means that we can easily sample
θ ∈ [0, 2π] by drawing a random variable, say z_θ, from U[0, 1] and rescaling
it with 2π. To obtain a sampler for r we need to compute the cumulative
distribution function for p(r) = r e^{−r²/2}:

F(r′) = ∫_0^{r′} r e^{−r²/2} dr = 1 − e^{−r′²/2} and hence r = F^{−1}(z) = √(−2 log(1 − z)).    (2.82)

Observing that z ∼ U[0, 1] implies that 1 − z ∼ U[0, 1] yields the following
sampler: draw z_θ, z_r ∼ U[0, 1] and compute x and y by

x = √(−2 log z_r) cos 2πz_θ and y = √(−2 log z_r) sin 2πz_θ.
Note that the Box-Müller transform yields two independent Gaussian ran-
dom variables. See Figure 2.15 for an example of the sampler.
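A direct transcription of the sampler in Python (an illustrative sketch, not
code from the original text):

import numpy as np

def box_muller(n, rng):
    z_theta = rng.uniform(size=n)
    z_r = 1.0 - rng.uniform(size=n)   # in (0, 1], keeps the log finite
    radius = np.sqrt(-2.0 * np.log(z_r))
    # Each pair (x, y) consists of two independent standard normal variates.
    return radius * np.cos(2 * np.pi * z_theta), radius * np.sin(2 * np.pi * z_theta)

x, y = box_muller(20_000, np.random.default_rng(5))
# Histogramming x reproduces the bell curve of Figure 2.15.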
Example 2.11 (Uniform distribution on the disc) A similar strategy
can be employed when sampling from the unit disc. In this case the
closed-form expression of the distribution is simply given by

p(x, y) = 1/π if x² + y² ≤ 1 and 0 otherwise.    (2.83)

Using the variable transform (2.81) yields

p(r, θ) = p(φ^{−1}(r, θ)) |∇_{r,θ} φ^{−1}(r, θ)| = r/π if r ≤ 1 and 0 otherwise.    (2.84)

Integrating out θ yields p(r) = 2r for r ∈ [0, 1] with corresponding CDF
F(r) = r² for r ∈ [0, 1]. Hence our sampler draws z_r, z_θ ∼ U[0, 1] and then
computes x = √z_r cos 2πz_θ and y = √z_r sin 2πz_θ.

2.5.2 Rejection Sampler


All the methods for random variable generation that we looked at so far re-
quire intimate knowledge about the pdf of the distribution. We now describe
a general purpose method, which can be used to generate samples from an
arbitrary distribution. Let us begin with sampling from a set:

Example 2.12 (Rejection Sampler) Denote by X a subset of the domain
of a density p. Then a sampler for drawing from p_X(x) ∝ p(x) for x ∈ X
and p_X(x) = 0 for x ∉ X, that is, p_X(x) = p(x|x ∈ X), is obtained by the
procedure:

repeat
  draw x ∼ p(x)
until x ∈ X
return x

That is, the algorithm keeps on drawing from p until the random variable is
contained in X. The probability that this occurs is clearly p(X). Hence the
larger p(X) the higher the efficiency of the sampler. See Figure 2.16.

Example 2.13 (Uniform distribution on a disc) The procedure works
trivially as follows: draw x, y ∼ U[0, 1]. Accept if (2x − 1)² + (2y − 1)² ≤ 1
and return the sample (2x − 1, 2y − 1). This sampler has efficiency π/4 since
this is the surface ratio between the unit ball and the unit square.
Note that this time we did not need to carry out any sophisticated measure
Fig. 2.16. Rejection sampler. Left: samples drawn from the uniform distribution
on [0, 1]². Middle: the samples drawn from the uniform distribution on the unit disc
are all the points in the grey shaded area. Right: the same procedure allows us to
sample uniformly from arbitrary sets.

Fig. 2.17. Accept reject sampling for the Beta(2, 5) distribution. Left: Samples are
generated uniformly from the blue rectangle (shaded area). Only those samples
which fall under the red curve of the Beta(2, 5) distribution (darkly shaded area)
are accepted. Right: The true density of the Beta(2, 5) distribution (red line) is
contrasted with the histogram of 10,000 samples drawn by the rejection sampler.

transform. This mathematical convenience came at the expense of a slightly
less efficient sampler — about 21% of all samples are rejected.

The same reasoning that we used to obtain a hard accept/reject procedure


can be used for a considerably more sophisticated rejection sampler. The
basic idea is that if, for a given distribution p we can find another distribution
q which, after rescaling, becomes an upper envelope on p, we can use q to
sample from and reject depending on the ratio between q and p.

Theorem 2.23 (Rejection Sampler) Denote by p and q distributions on
X and let c be a constant such that cq(x) ≥ p(x) for all x ∈ X.
Then the algorithm below draws from p with acceptance probability c^{−1}.

repeat
  draw x ∼ q(x) and t ∼ U[0, 1]
until c t ≤ p(x)/q(x)
return x

Proof Denote by Z the event that the sample drawn from q is accepted.
Then by Bayes rule the probability Pr(x|Z) can be written as follows

Pr(x|Z) = Pr(Z|x) Pr(x) / Pr(Z) = [ (p(x)/(c q(x))) · q(x) ] / c^{−1} = p(x).    (2.85)

Here we used that Pr(Z) = ∫ Pr(Z|x) q(x) dx = ∫ c^{−1} p(x) dx = c^{−1}.
Note that the algorithm of Example 2.12 is a special case of such a rejection
sampler — we majorize p_X by the uniform distribution rescaled by 1/p(X).

Example 2.14 (Beta distribution) Recall that the Beta(a, b) distribution,
as a member of the exponential family with sufficient statistics
(log x, log(1 − x)), is given by

p(x|a, b) = [Γ(a + b)/(Γ(a) Γ(b))] x^{a−1} (1 − x)^{b−1}.    (2.86)

For given (a, b) one can verify (Problem 2.25) that

M := argmax_x p(x|a, b) = (a − 1)/(a + b − 2),    (2.87)

provided a > 1. Hence, if we use as proposal distribution the uniform
distribution U[0, 1] with scaling factor c = p(M|a, b), we may apply
Theorem 2.23. As illustrated in Figure 2.17, to generate a sample from
Beta(a, b) we first generate a pair (x, t) uniformly at random from the shaded
rectangle. A sample is retained if ct ≤ p(x|a, b), and rejected otherwise. The
acceptance rate of this sampler is 1/c.
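The corresponding sampler is only a few lines of Python (a sketch; the
helper names are ours, not from the original text):

import numpy as np
from math import gamma

def beta_pdf(x, a, b):
    # Beta density (2.86).
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

def sample_beta(a, b, n, rng):
    # Theorem 2.23 with a uniform proposal on [0, 1] and c = p(M|a, b),
    # where M is the mode (2.87); assumes a, b > 1 so the density is bounded.
    M = (a - 1) / (a + b - 2)
    c = beta_pdf(M, a, b)
    out = []
    while len(out) < n:
        x, t = rng.uniform(), rng.uniform()
        if c * t <= beta_pdf(x, a, b):   # accept with probability p(x)/c
            out.append(x)
    return np.array(out)

samples = sample_beta(2, 5, 10_000, np.random.default_rng(6))
# The acceptance rate is 1/c, here roughly 0.41, as in Figure 2.17.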

Example 2.15 (Normal distribution) We may use the Laplace
distribution to generate samples from the Normal distribution. That is, we use

q(x|λ) = (λ/2) e^{−λ|x|}    (2.88)

as the proposal distribution. For a normal distribution p = N(0, 1) with zero
mean and unit variance it turns out that choosing λ = 1 yields the most
efficient sampling scheme (see Problem 2.27) with

p(x) ≤ √(2e/π) q(x|λ = 1).

As illustrated in Figure 2.18, we first generate x ∼ q(x|λ = 1) using the
inverse transform method (see Example 2.9 and Problem 2.21) and t ∼ U[0, 1].
If √(2e/π) t q(x|λ = 1) ≤ p(x) we accept x, otherwise we reject it. The
efficiency of this scheme is √(π/2e).

Fig. 2.18. Rejection sampling for the Normal distribution (red curve). Samples
are generated from the Laplace distribution rescaled by √(2e/π). Only those
samples which fall under the red curve of the standard normal distribution (darkly
shaded area) are accepted.

While rejection sampling is fairly efficient in low dimensions, its efficiency is
unsatisfactory in high dimensions. This leads us to an instance of the curse of
dimensionality [Bel61]: the pdf of a d-dimensional Gaussian random variable
centered at 0 with variance σ²1 is given by

p(x|σ²) = (2π)^{−d/2} σ^{−d} e^{−‖x‖²/(2σ²)}.

Now suppose that we want to draw from p(x|σ²) by sampling from another
Gaussian q with slightly larger variance ρ² > σ². In this case the ratio between
both distributions is maximized at 0 and it yields

c = p(0|σ²)/q(0|ρ²) = [ρ/σ]^d.
If ρ/σ = 1.01 and d = 1000, we find that c ≈ 20960. In other words, we
need to generate approximately 21,000 samples on average from q to
draw a single sample from p. We will discuss a more sophisticated sampling
algorithm, namely Gibbs Sampling, in Section ??. It allows us to draw from
rather nontrivial distributions as long as the distributions of small subsets
of random variables are simple enough to be tackled directly.

Problems
Problem 2.1 (Bias Variance Decomposition {1}) Prove that the vari-
ance VarX [x] of a random variable can be written as EX [x2 ] − EX [x]2 .

Problem 2.2 (Moment Generating Function {2}) Prove that the char-
acteristic function can be used to generate moments as given in (2.12). Hint:
use the Taylor expansion of the exponential and apply the differential oper-
ator before the expectation.

Problem 2.3 (Cumulative Error Function {2})

erf(x) = (2/√π) ∫_0^x e^{−t²} dt.    (2.89)

Problem 2.4 (Weak Law of Large Numbers {2}) In analogy to the proof
of the central limit theorem prove the weak law of large numbers. Hint: use
a first order Taylor expansion of eiωt = 1 + iωt + o(t) to compute an approx-
imation of the characteristic function. Next compute the limit m → ∞ for
φX̄m . Finally, apply the inverse Fourier transform to associate the constant
distribution at the mean µ with it.

Problem 2.5 (Rates and confidence bounds {3}) Show that the rate
of Hoeffding's bound is tight — derive a bound from the central limit theorem
and compare it to the Hoeffding rate.

Problem 2.6 Why can’t we just use each chip on the wafer as a random
variable? Give a counterexample. Give bounds if we actually were allowed to
do this.

Problem 2.7 (Union Bound) Work on many bounds at the same time.
We only have logarithmic penalty.

Problem 2.8 (Randomized Rounding {4}) Solve the linear system of
equations Ax = b for integral x.

Problem 2.9 (Randomized Projections {3}) Prove that the randomized
projections converge.

Problem 2.10 (The Count-Min Sketch {5}) Prove the projection trick.

Problem 2.11 (Parzen windows with triangle kernels {1}) Suppose you
are given the following data: X = {2, 3, 3, 5, 5}. Plot the estimated density
using a kernel density estimator with the following kernel:

k(u) = 0.5 − 0.25|u| if |u| ≤ 2, and k(u) = 0 otherwise.

Problem 2.12 Gaussian process link with Gaussian prior on natural
parameters.

Problem 2.13 Optimization for Gaussian regularization

Problem 2.14 Conjugate prior (Student-t and Wishart).

Problem 2.15 (Multivariate Gaussian {1}) Prove that Σ > 0 is a necessary
and sufficient condition for the normal distribution to be well defined.

Problem 2.16 (Discrete Exponential Distribution {2}) φ(x) = x and
uniform measure.

Problem 2.17 Exponential random graphs.

Problem 2.18 (Maximum Entropy Distribution) Show that exponential
families arise as the solution of the maximum entropy estimation problem.

Problem 2.19 (Maximum Likelihood Estimates for Normal Distributions)
Derive the maximum likelihood estimates for a normal distribution, that is,
show that they result in

µ̂ = (1/m) Σ_{i=1}^m xi  and  σ̂² = (1/m) Σ_{i=1}^m (xi − µ̂)²   (2.90)

using the exponential families parametrization. Next show that while the
mean estimate µ̂ is unbiased, the variance estimate has a slight bias of O(1/m).
To see this, take the expectation with respect to σ̂².

Problem 2.20 (cdf of Logistic random variable {1}) Show that the cdf
of the Logistic random variable (??) is given by (??).

Problem 2.21 (Double-exponential (Laplace) distribution {1}) Use the
inverse-transform method to generate a sample from the double-exponential
(Laplace) distribution (2.88).

Problem 2.22 (Normal random variables in polar coordinates {1})
Let X1 and X2 be standard normal random variables and let (R, θ) denote
the polar coordinates of the pair (X1, X2). Show that R² ∼ χ²₂ and
θ ∼ Unif[0, 2π].

Problem 2.23 (Monotonically increasing mappings {1}) A mapping
T : R → R is one-to-one if, and only if, T is monotonically increasing, that is,
x > y implies that T(x) > T(y).

Problem 2.24 (Monotonically increasing multi-maps {2}) Let T : Rⁿ →
Rⁿ be one-to-one. If X ∼ pX(x), then show that the distribution pY(y) of
Y = T(X) can be obtained via (??).

Problem 2.25 (Argmax of the Beta(a, b) distribution {1}) Show that the
mode of the Beta(a, b) distribution is given by (2.87).

Problem 2.26 (Accept reject sampling for the unit disk {2}) Give at
least TWO different accept-reject based sampling schemes to generate sam-
ples uniformly at random from the unit disk. Compute their efficiency.

Problem 2.27 (Optimizing Laplace for Standard Normal {1}) Optimize
the ratio p(x)/g(x|µ, σ), with respect to µ and σ, where p(x) is the standard
normal distribution (??), and g(x|µ, σ) is the Laplace distribution (2.88).

Problem 2.28 (Normal Random Variable Generation {2}) The aim of
this problem is to write code to generate standard normal random variables
(??) by using different methods. To do this generate U ∼ Unif[0, 1] and apply

(i) the Box-Muller transformation outlined in Section ??.
(ii) the following approximation to the inverse CDF

Φ⁻¹(α) ≈ t − (a₀ + a₁t)/(1 + b₁t + b₂t²),   (2.91)

where t² = log(α⁻²) and

a₀ = 2.30753, a₁ = 0.27061, b₁ = 0.99229, b₂ = 0.04481

(iii) the method outlined in Example 2.15.
Plot a histogram of the samples you generated to confirm that they are nor-
mally distributed. Compare these different methods in terms of the time
needed to generate 1000 random variables.

Problem 2.29 (Non-standard Normal random variables {2}) Describe
a scheme based on the Box-Muller transform to generate d dimensional normal
random variables p(x|0, I). How can this be used to generate arbitrary
normal random variables p(x|µ, Σ)?

Problem 2.30 (Uniform samples from a disk {2}) Show how the ideas
described in Section ?? can be generalized to draw samples uniformly at
random from an axis parallel ellipse: {(x, y) : x²/a² + y²/b² ≤ 1}.
3 Optimization

Optimization plays an increasingly important role in machine learning. For
instance, many machine learning algorithms minimize a regularized risk
functional:

min_f J(f) := λΩ(f) + Remp(f),   (3.1)

with the empirical risk

Remp(f) := (1/m) Σ_{i=1}^m l(f(xi), yi).   (3.2)

Here xi are the training instances and yi are the corresponding labels. The
loss function l measures the discrepancy between yi and the predictions f(xi).
Finding the optimal f involves solving an optimization problem.
This chapter provides a self-contained overview of some basic concepts and
tools from optimization, especially geared towards solving machine learning
problems. In terms of concepts, we will cover topics related to convexity,
duality, and Lagrange multipliers. In terms of tools, we will cover a variety
of optimization algorithms including gradient descent, stochastic gradient
descent, Newton’s method, and Quasi-Newton methods. We will also look
at some specialized algorithms tailored towards solving Linear Programming
and Quadratic Programming problems which often arise in machine learning
problems.

3.1 Preliminaries
Minimizing an arbitrary function is, in general, very difficult, but if the ob-
jective function to be minimized is convex then things become considerably
simpler. As we will see shortly, the key advantage of dealing with convex
functions is that a local optimum is also the global optimum. Therefore, well
developed tools exist to find the global minimum of a convex function. Conse-
quently, many machine learning algorithms are now formulated in terms of
convex optimization problems. We briefly review the concept of convex sets
and functions in this section.


3.1.1 Convex Sets
Definition 3.1 (Convex Set) A subset C of Rn is said to be convex if
(1 − λ)x + λy ∈ C whenever x ∈ C, y ∈ C and 0 < λ < 1.

Intuitively, what this means is that the line joining any two points x and y from
the set C lies inside C (see Figure 3.1). It is easy to see (Exercise 3.1) that
intersections of convex sets are also convex.

Fig. 3.1. The convex set (left) contains the line joining any two points that belong
to the set. A non-convex set (right) does not satisfy this property.

A vector sum Σ_i λi xi is called a convex combination if λi ≥ 0 and Σ_i λi =
1. Convex combinations are helpful in defining a convex hull:

Definition 3.2 (Convex Hull) The convex hull, conv(X), of a finite sub-
set X = {x1, . . . , xn} of Rn consists of all convex combinations of x1, . . . , x n.

3.1.2 Convex Functions
Let f be a real valued function defined on a set X ⊂ Rn. The set

{(x, µ) : x ∈ X, µ ∈ R, µ ≥ f (x)} (3.3)

is called the epigraph of f . The function f is defined to be a convex function
if its epigraph is a convex set in Rn+1. An equivalent, and more commonly
used, definition (Exercise 3.5) is as follows (see Figure 3.2 for geometric
intuition):

Definition 3.3 (Convex Function) A function f defined on a set X is
called convex if, for any x, x′ ∈ X and any 0 < λ < 1 such that λx + (1 −
λ)x′ ∈ X, we have

f(λx + (1 − λ)x′) ≤ λf(x) + (1 − λ)f(x′).   (3.4)



A function f is called strictly convex if

f(λx + (1 − λ)x′) < λf(x) + (1 − λ)f(x′)   (3.5)

whenever x ≠ x′.

In fact, the above definition can be extended to show that if f is a convex
function and λi ≥ 0 with Σ_i λi = 1 then

f(Σ_i λi xi) ≤ Σ_i λi f(xi).   (3.6)

The above inequality is called Jensen's inequality (Problem ??).
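A quick numerical sanity check of (3.6) in Python (NumPy assumed; the
choice f = exp is ours) might look as follows:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    lam = rng.dirichlet(np.ones(5))    # lambda_i >= 0 with sum 1
    lhs = np.exp(np.dot(lam, x))       # f(sum_i lambda_i x_i) for f = exp
    rhs = np.dot(lam, np.exp(x))       # sum_i lambda_i f(x_i)
    assert lhs <= rhs + 1e-12          # Jensen's inequality (3.6)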


Fig. 3.2. A convex function (left) satisfies (3.4); the shaded region denotes its epi-
graph. A nonconvex function (right) does not satisfy (3.4).

If f : X → R is differentiable, then f is convex if, and only if,

f(x′) ≥ f(x) + ⟨x′ − x, ∇f(x)⟩ for all x, x′ ∈ X.   (3.7)

In other words, the first order Taylor approximation lower bounds the convex
function universally (see Figure 3.4). Here and in the rest of the chapter
⟨x, y⟩ denotes the Euclidean dot product between vectors x and y, that is,

⟨x, y⟩ := Σ_i xi yi.   (3.8)

If f is twice differentiable, then f is convex if, and only if, its Hessian is
positive semi-definite, that is,
∇ 2f (x) ≥ 0. (3.9)
For twice differentiable strictly convex functions, the Hessian matrix is pos-
itive definite, that is, ∇ 2f (x) > 0. We briefly summarize some operations
which preserve convexity:
Addition If f1 and f2 are convex, then f1 + f2 is also convex.
Scaling If f is convex, then αf is convex for α > 0.
Affine Transform If f is convex, then g(x) = f(Ax + b) for some matrix
    A and vector b is also convex.
Adding a Linear Function If f is convex, then g(x) = f(x) + ⟨a, x⟩ for
    some vector a is also convex.
Subtracting a Linear Function If f is convex, then g(x) = f(x) − ⟨a, x⟩
    for some vector a is also convex.
Pointwise Maximum If fi are convex, then g(x) = max_i fi(x) is also convex.
Scalar Composition If f(x) = h(g(x)), then f is convex if a) g is convex,
    and h is convex, non-decreasing or b) g is concave, and h is convex,
    non-increasing.


Fig. 3.3. Left: Convex Function in two variables. Right: the corresponding convex
below-sets {x|f (x) ≤ c}, for different values of c. This is also called a contour plot.

There is an intimate relation between convex functions and convex sets.
For instance, the following lemma shows that the below sets (level sets) of
convex functions, sets for which f (x) ≤ c, are convex.

Lemma 3.4 (Below-Sets of Convex Functions) Denote by f : X → R
a convex function. Then the set

Xc := {x | x ∈ X and f (x) ≤ c}, for all c ∈ R, (3.10)

is convex.

Proof For any x, x′ ∈ Xc, we have f(x), f(x′) ≤ c. Moreover, since f is
convex, we also have

f(λx + (1 − λ)x′) ≤ λf(x) + (1 − λ)f(x′) ≤ c for all 0 < λ < 1.   (3.11)

Hence, for all 0 < λ < 1, we have (λx + (1 − λ)x′) ∈ Xc, which proves the
claim. Figure 3.3 depicts this situation graphically.

As we hinted in the introduction of this chapter, minimizing an arbitrary
function on a (possibly not even compact) set of arguments can be a difficult
task, and will most likely exhibit many local minima. In contrast, minimiza-
tion of a convex objective function on a convex set exhibits exactly one global
minimum. We now prove this property.

Theorem 3.5 (Minima on Convex Sets) If the convex function f : X →
R attains its minimum, then the set of x ∈ X, for which the minimum value
is attained, is a convex set. Moreover, if f is strictly convex, then this set
contains a single element.

Proof Denote by c the minimum of f on X. Then the set Xc := {x|x ∈
X and f(x) ≤ c} is clearly convex.
If f is strictly convex, then for any two distinct x, x′ ∈ Xc and any 0 <
λ < 1 we have

f(λx + (1 − λ)x′) < λf(x) + (1 − λ)f(x′) = λc + (1 − λ)c = c,

which contradicts the assumption that f attains its minimum on Xc. There-
fore Xc must contain only a single element.
As the following lemma shows, the minimum point can be characterized
precisely.

Lemma 3.6 Let f : X → R be a differentiable convex function. Then x is
a minimizer of f, if, and only if,

⟨x′ − x, ∇f(x)⟩ ≥ 0 for all x′.   (3.12)

Proof To show the forward implication, suppose that x is the optimum
but (3.12) does not hold, that is, there exists an x′ for which

⟨x′ − x, ∇f(x)⟩ < 0.

Consider the line segment z(λ) = (1 − λ)x + λx′, with 0 < λ < 1. Since X
is convex, z(λ) lies in X. On the other hand,

d/dλ f(z(λ))|_{λ=0} = ⟨x′ − x, ∇f(x)⟩ < 0,

which shows that for small values of λ we have f(z(λ)) < f(x), thus showing
that x is not optimal.
The reverse implication follows from (3.7) by noting that f(x′) ≥ f(x)
whenever (3.12) holds.

One way to ensure that (3.12) holds is to set ∇f (x) = 0. In other words,
minimizing a convex function is equivalent to finding an x such that ∇f(x) =
0. Therefore, the first order conditions are both necessary and sufficient
when minimizing a convex function.

3.1.3 Subgradients
So far, we worked with differentiable convex functions. The subgradient is a
generalization of gradients appropriate for convex functions, including those
which are not necessarily smooth.

Definition 3.7 (Subgradient) Suppose x is a point where a convex func-
tion f is finite. Then a subgradient is the normal vector of any tangential
supporting hyperplane of f at x. Formally µ is called a subgradient of f at
x if, and only if,

f(x′) ≥ f(x) + ⟨x′ − x, µ⟩ for all x′.   (3.13)

The set of all subgradients at a point is called the subdifferential, and is de-
noted by ∂xf (x). If this set is not empty then f is said to be subdifferentiable
at x. On the other hand, if this set is a singleton then the function is said
to be differentiable at x. In this case we use ∇f (x) to denote the gradient
of f . Convex functions are subdifferentiable everywhere in their domain. We now
state some simple rules of subgradient calculus:

Addition ∂x(f1(x) + f2(x)) = ∂xf1(x) + ∂xf2(x)
Scaling ∂x αf(x) = α ∂xf(x), for α > 0
Affine Transform If g(x) = f(Ax + b) for some matrix A and vector b,
    then ∂xg(x) = Aᵀ ∂yf(y) with y = Ax + b.
Pointwise Maximum If g(x) = max_i fi(x) then ∂g(x) = conv(∂x fi′) where
    i′ ∈ argmax_i fi(x).

The definition of a subgradient can also be understood geometrically. As
illustrated by Figure 3.4, a differentiable convex function is always lower
bounded by its first order Taylor approximation. This concept can be ex-
tended to non-smooth functions via subgradients, as Figure 3.5 shows.
By using more involved concepts, the proof of Lemma 3.6 can be extended
to subgradients. In this case, minimizing a convex nonsmooth function en-
tails finding an x such that 0 ∈ ∂f(x).
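As a small illustration, the sketch below checks the subgradient inequality
(3.13) for f(x) = |x| at the kink x = 0, where any µ ∈ [−1, 1] is a valid
subgradient (the helper function is ours):

    import numpy as np

    def is_subgradient(mu, x):
        # Check (3.13) for f(x) = |x| on a grid of test points x'.
        xs = np.linspace(-2.0, 2.0, 401)
        return np.all(np.abs(xs) >= np.abs(x) + (xs - x) * mu)

    print(is_subgradient(0.0, 0.0))   # True: 0 is in the subdifferential
    print(is_subgradient(0.5, 0.0))   # True: any mu in [-1, 1] works
    print(is_subgradient(1.5, 0.0))   # False: mu = 1.5 violates (3.13)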

3.1.4 Strongly Convex Functions
When analyzing optimization algorithms, it is sometimes easier to work with
strongly convex functions, which strengthen the definition of convexity.

Definition 3.8 (Strongly Convex Function) A convex function f is σ-
strongly convex if, and only if, there exists a constant σ > 0 such that the
function f(x) − (σ/2) ǁxǁ² is convex.

The constant σ is called the modulus of strong convexity of f. If f is twice
differentiable, then there is an equivalent, and perhaps easier, definition of
strong convexity: f is strongly convex if there exists a σ such that

∇²f(x) ≥ σI.   (3.14)

In other words, the smallest eigenvalue of the Hessian of f is uniformly lower
bounded by σ everywhere. Some important examples of strongly convex
functions include:

Example 3.1 (Squared Euclidean Norm) The function f(x) = (λ/2) ǁxǁ²
is λ-strongly convex.

Example 3.2 (Negative Entropy) Let ∆n = {x s.t. Σ_i xi = 1 and xi ≥ 0}
be the n dimensional simplex, and f : ∆n → R be the negative entropy:

f(x) = Σ_i xi log xi.   (3.15)

Then f is 1-strongly convex with respect to the ǁ·ǁ1 norm on the simplex
(see Problem 3.7).

If f is a σ-strongly convex function then one can show the following prop-
erties (Exercise 3.8). Here x, x′ are arbitrary and µ ∈ ∂f(x) and µ′ ∈ ∂f(x′).

f(x′) ≥ f(x) + ⟨x′ − x, µ⟩ + (σ/2) ǁx′ − xǁ²   (3.16)
f(x′) ≤ f(x) + ⟨x′ − x, µ⟩ + (1/2σ) ǁµ′ − µǁ²   (3.17)
⟨x − x′, µ − µ′⟩ ≥ σ ǁx − x′ǁ²   (3.18)
⟨x − x′, µ − µ′⟩ ≤ (1/σ) ǁµ − µ′ǁ².   (3.19)

3.1.5 Convex Functions with Lipschitz Continuous Gradient
A somewhat symmetric concept to strong convexity is the Lipschitz conti-
nuity of the gradient. As we will see later they are connected by Fenchel
duality.

Definition 3.9 (Lipschitz Continuous Gradient) A differentiable con-
vex function f is said to have a Lipschitz continuous gradient, if there exists
a constant L > 0, such that

ǁ∇f(x) − ∇f(x′)ǁ ≤ L ǁx − x′ǁ  ∀x, x′.   (3.20)

As before, if f is twice differentiable, then there is an equivalent, and perhaps
easier, definition of Lipschitz continuity of the gradient: f has a Lipschitz
continuous gradient if there exists an L such that

LI ≥ ∇²f(x).   (3.21)

In other words, the largest eigenvalue of the Hessian of f is uniformly upper
bounded by L everywhere. If f has a Lipschitz continuous gradient with
modulus L, then one can show the following properties (Exercise 3.9).

f(x′) ≤ f(x) + ⟨x′ − x, ∇f(x)⟩ + (L/2) ǁx − x′ǁ²   (3.22)
f(x′) ≥ f(x) + ⟨x′ − x, ∇f(x)⟩ + (1/2L) ǁ∇f(x) − ∇f(x′)ǁ²   (3.23)
⟨x − x′, ∇f(x) − ∇f(x′)⟩ ≤ L ǁx − x′ǁ²   (3.24)
⟨x − x′, ∇f(x) − ∇f(x′)⟩ ≥ (1/L) ǁ∇f(x) − ∇f(x′)ǁ².   (3.25)

3.1.6 Fenchel Duality

The Fenchel conjugate of a function f is given by

f*(x*) = sup_x {⟨x, x*⟩ − f(x)}.   (3.26)

Even if f is not convex, the Fenchel conjugate, which is written as a supremum
over linear functions, is always convex. Some rules for computing Fenchel
duals are summarized in Table 3.1. If f is convex and its epigraph (3.3) is
a closed convex set, then f** = f. If f and f* are convex, then they satisfy
the so-called Fenchel-Young inequality

f(x) + f*(x*) ≥ ⟨x, x*⟩ for all x, x*.   (3.27)

Fig. 3.4. A convex function is always lower bounded by its first order Taylor ap-
proximation. This is true even if the function is not differentiable (see Figure 3.5)

Fig. 3.5. Geometric intuition of a subgradient. The nonsmooth convex function
(solid blue) is non-differentiable at the “kink” points. We illustrate two of its
subgradients (dashed green and red lines) at a “kink” point which are tangential to
the function. The normal vectors to these lines are subgradients. Observe that the
first order Taylor approximations obtained by using the subgradients lower bound
the convex function.

This inequality becomes an equality whenever x* ∈ ∂f(x), that is,

f(x) + f*(x*) = ⟨x, x*⟩ for all x and x* ∈ ∂f(x).   (3.28)

Strong convexity (Section 3.1.4) and Lipschitz continuity of the gradient

Table 3.1. Rules for computing Fenchel Duals

Scalar Addition If g(x) = f(x) + α then g*(x*) = f*(x*) − α.
Function Scaling If α > 0 and g(x) = αf(x) then g*(x*) = αf*(x*/α).
Parameter Scaling If α ≠ 0 and g(x) = f(αx) then g*(x*) = f*(x*/α).
Linear Transformation If A is an invertible matrix then (f ∘ A)* = f* ∘ (A⁻¹)*.
Shift If g(x) = f(x − x₀) then g*(x*) = f*(x*) + ⟨x*, x₀⟩.
Sum If g(x) = f1(x) + f2(x) then
    g*(x*) = inf {f1*(x1*) + f2*(x2*) s.t. x1* + x2* = x*}.
Pointwise Infimum If g(x) = inf_i fi(x) then g*(x*) = sup_i fi*(x*).

(Section 3.1.5) are related by Fenchel duality according to the following
lemma, which we state without proof.

Lemma 3.10 (Theorem 4.2.1 and 4.2.2 [HUL93])

(i) If f is σ-strongly convex, then f* has a Lipschitz continuous gradient
    with modulus 1/σ.
(ii) If f is convex and has a Lipschitz continuous gradient with modulus
    L, then f* is 1/L-strongly convex.
Next we describe some convex functions and their Fenchel conjugates.

Example 3.3 (Squared Euclidean Norm) Whenever f(x) = ½ ǁxǁ² we
have f*(x*) = ½ ǁx*ǁ², that is, the squared Euclidean norm is its own
conjugate.

Example 3.4 (Negative Entropy) The Fenchel conjugate of the negative
entropy (3.15) is

f*(x*) = log Σ_i exp(xi*).
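This identity can be verified numerically: the supremum in (3.26) over the
simplex is attained at the softmax of x*, so evaluating the objective there
should reproduce log Σ_i exp(xi*). A sketch (SciPy assumed, variable names
ours):

    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(0)
    xstar = rng.normal(size=4)
    x = np.exp(xstar - logsumexp(xstar))              # softmax: the maximizer
    value = np.dot(x, xstar) - np.sum(x * np.log(x))  # <x, x*> - f(x)
    assert np.allclose(value, logsumexp(xstar))       # equals f*(x*)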

3.1.7 Bregman Divergence

Let f be a differentiable convex function. The Bregman divergence defined
by f is given by

∆f(x, x′) = f(x) − f(x′) − ⟨x − x′, ∇f(x′)⟩.   (3.29)

Also see Figure 3.6. Here are some well known examples.

Example 3.5 (Square Euclidean Norm) Set f(x) = ½ ǁxǁ². Clearly,
∇f(x) = x and therefore

∆f(x, x′) = ½ ǁxǁ² − ½ ǁx′ǁ² − ⟨x − x′, x′⟩ = ½ ǁx − x′ǁ².

Fig. 3.6. f(x) is the value of the function at x, while f(x′) + ⟨x − x′, ∇f(x′)⟩ denotes
the first order Taylor expansion of f around x′, evaluated at x. The difference
between these two quantities is the Bregman divergence, as illustrated.

Example 3.6 (Relative Entropy) Let f be the un-normalized entropy

f(x) = Σ_i (xi log xi − xi).   (3.30)

One can calculate ∇f(x) = log x, where log x is the component wise logarithm
of the entries of x, and write the Bregman divergence

∆f(x, x′) = Σ_i xi log xi − Σ_i xi − Σ_i xi log xi′ + Σ_i xi′ − ⟨x − x′, log x′⟩
          = Σ_i (xi log(xi/xi′) + xi′ − xi).

Example 3.7 (p-norm) Let f be the square p-norm

f(x) = ½ ǁxǁ_p² = ½ (Σ_i |xi|^p)^(2/p).   (3.31)

We say that the q-norm is dual to the p-norm whenever 1/p + 1/q = 1. One
can verify (Problem 3.12) that the i-th component of the gradient ∇f(x) is

∇_{xi} f(x) = sign(xi) |xi|^(p−1) / ǁxǁ_p^(p−2).   (3.32)

The corresponding Bregman divergence is

∆f(x, x′) = ½ ǁxǁ_p² − ½ ǁx′ǁ_p² − Σ_i (xi − xi′) sign(xi′) |xi′|^(p−1) / ǁx′ǁ_p^(p−2).

The following properties of the Bregman divergence immediately follow:

• ∆f(x, x′) is convex in x.
• ∆f(x, x′) ≥ 0.
• ∆f may not be symmetric, that is, in general ∆f(x, x′) ≠ ∆f(x′, x).
• ∇x ∆f(x, x′) = ∇f(x) − ∇f(x′).

The next lemma establishes another important property.

Lemma 3.11 The Bregman divergence (3.29) defined by a differentiable
convex function f satisfies

∆f(x, y) + ∆f(y, z) − ∆f(x, z) = ⟨∇f(z) − ∇f(y), x − y⟩.   (3.33)

Proof
∆f(x, y) + ∆f(y, z) = f(x) − f(y) − ⟨x − y, ∇f(y)⟩ + f(y) − f(z) − ⟨y − z, ∇f(z)⟩
                    = f(x) − f(z) − ⟨x − y, ∇f(y)⟩ − ⟨y − z, ∇f(z)⟩
                    = ∆f(x, z) + ⟨∇f(z) − ∇f(y), x − y⟩.

3.2 Unconstrained Smooth Convex Minimization
In this section we will describe various methods to minimize a smooth convex
objective function.

3.2.1 Minimizing a One-Dimensional Convex Function
As a warm up let us consider the problem of minimizing a smooth one di-
mensional convex function J : R → R in the interval [L, U ]. This seemingly

Algorithm 3.1 Interval Bisection

1: Input: L, U, precision ϵ
2: Set t = 0, a0 = L and b0 = U
3: while (bt − at) · J′(U) > ϵ do
4:   if J′((at + bt)/2) > 0 then
5:     at+1 = at and bt+1 = (at + bt)/2
6:   else
7:     at+1 = (at + bt)/2 and bt+1 = bt
8:   end if
9:   t = t + 1
10: end while
11: Return: (at + bt)/2

simple problem has many applications. As we will see later, many optimiza-
tion methods find a direction of descent and minimize the objective function
along this direction1; this subroutine is called a line search. Algorithm 3.1
depicts a simple line search routine based on interval bisection.
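A direct Python transcription of Algorithm 3.1 (assuming access to the
derivative J′ as a callable; a sketch, not the book's code) looks as follows:

    def interval_bisection(grad_J, L, U, eps):
        # Algorithm 3.1: assumes J is convex on [L, U] with
        # J'(L) < 0 < J'(U), so a minimizer lies inside the interval.
        a, b = L, U
        while (b - a) * grad_J(U) > eps:
            mid = 0.5 * (a + b)
            if grad_J(mid) > 0:
                b = mid
            else:
                a = mid
        return 0.5 * (a + b)

    # e.g. minimizing J(w) = (w - 1)^2 on [0, 4]:
    # interval_bisection(lambda w: 2 * (w - 1), 0.0, 4.0, 1e-8)  ->  ~1.0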
Before we show that Algorithm 3.1 converges, let us first derive an im-
portant property of convex functions of one variable. For a differentiable
one-dimensional convex function J (3.7) reduces to

J(w) ≥ J(w′) + (w − w′) · J′(w′),   (3.34)

where J′(w) denotes the gradient of J. Exchanging the role of w and w′ in
(3.34), we can write

J(w′) ≥ J(w) + (w′ − w) · J′(w).   (3.35)

Adding the above two equations yields

(w − w′) · (J′(w) − J′(w′)) ≥ 0.   (3.36)

If w ≥ w′, then this implies that J′(w) ≥ J′(w′). In other words, the gradient
of a one dimensional convex function is monotonically non-decreasing.
Recall that minimizing a convex function is equivalent to finding a w∗ such
that J′(w∗) = 0. Furthermore, it is easy to see that the interval bisection
maintains the invariant J′(at) < 0 and J′(bt) > 0. This along with the
monotonicity of the gradient suffices to ensure that w∗ ∈ (at, bt). Setting
w′ = w∗ in (3.34), and using the monotonicity of the gradient allows us to
1 If the objective function is convex, then the one dimensional function obtained by restricting
it along the search direction is also convex (Exercise 3.10).

write for any w′ ∈ (at, bt)

J(w′) − J(w∗) ≤ (w′ − w∗) · J′(w′) ≤ (bt − at) · J′(U).   (3.37)

Since we halve the interval (at, bt) at every iteration, it follows that (bt − at) =
(U − L)/2^t. Therefore

J(w′) − J(w∗) ≤ (U − L) · J′(U) / 2^t,   (3.38)

for all w′ ∈ (at, bt). In other words, to find an ϵ-accurate solution, that is,
J(w′) − J(w∗) ≤ ϵ, we only need log(U − L) + log J′(U) + log(1/ϵ) < t itera-
tions. An algorithm which converges to an ϵ accurate solution in O(log(1/ϵ))
iterations is said to be linearly convergent.
For multi-dimensional objective functions, one cannot rely on the mono-
tonicity property of the gradient. Therefore, one needs more sophisticated
optimization algorithms, some of which we now describe.

3.2.2 Coordinate Descent
Coordinate descent is conceptually the simplest algorithm for minimizing a
multidimensional smooth convex function J : Rn → R. At every iteration
select a coordinate, say i, and update
wt+1 = wt − ηtei. (3.39)
Here ei denotes the i-th basis vector, that is, a vector with one at the i-th co-
ordinate and zeros everywhere else, while ηt ∈ R is a non-negative scalar step
size. One could, for instance, minimize the one dimensional convex function
J(wt − ηei) to obtain the stepsize ηt. The coordinates can either be selected
cyclically, that is, 1, 2, . . . , n, 1, 2, . . . or greedily, that is, the coordinate which
yields the maximum reduction in function value.
Even though coordinate descent can be shown to converge if J has a Lip-
schitz continuous gradient [LT92], in practice it can be quite slow. However,
if a high precision solution is not required, as is the case in some machine
learning applications, coordinate descent is often used because a) the cost
per iteration is very low and b) the speed of convergence may be acceptable
especially if the variables are loosely coupled.

3.2.3 Gradient Descent
Gradient descent (also widely known as steepest descent) is an optimization
technique for minimizing multidimensional smooth convex objective func-
tions of the form J : Rn → R. The basic idea is as follows: Given a location

wt at iteration t, compute the gradient ∇J(wt), and update
wt+1 = wt − ηt∇J(wt), (3.40)
where ηt is a scalar stepsize. See Algorithm 3.2 for details. Different variants
of gradient descent depend on how ηt is chosen:

Exact Line Search: Since J(wt − η∇J(wt)) is a one dimensional convex
function in η, one can use Algorithm 3.1 to compute:

ηt = argmin_η J(wt − η∇J(wt)).   (3.41)

Instead of the simple bisecting line search more sophisticated line searches
such as the Moré-Thuente line search or the golden bisection rule can also
be used to speed up convergence (see [NW99] Chapter 3 for an extensive
discussion).

Inexact Line Search: Instead of minimizing J(wt − η∇J(wt)) we could
simply look for a stepsize which results in a sufficient decrease in the objective
function value. One popular set of sufficient decrease conditions is the Wolfe
conditions

J(wt+1) ≤ J(wt) + c1 ηt ⟨∇J(wt), wt+1 − wt⟩  (sufficient decrease)   (3.42)
⟨∇J(wt+1), wt+1 − wt⟩ ≥ c2 ⟨∇J(wt), wt+1 − wt⟩  (curvature)   (3.43)

with 0 < c1 < c2 < 1 (see Figure 3.7). The Wolfe conditions are also called
the Armijo-Goldstein conditions. If only the sufficient decrease condition
(3.42) is enforced, then it is called the Armijo rule.

Fig. 3.7. The sufficient decrease condition (left) places an upper bound on the
acceptable stepsizes while the curvature condition (right) places a lower bound on
the acceptable stepsizes.

Algorithm 3.2 Gradient Descent

1: Input: Initial point w0, gradient norm tolerance ϵ
2: Set t = 0
3: while ǁ∇J(wt)ǁ ≥ ϵ do
4:   wt+1 = wt − ηt∇J(wt)
5:   t = t + 1
6: end while
7: Return: wt

Decaying Stepsize: Instead of performing a line search at every iteration,
one can use a stepsize which decays according to a fixed schedule, for
example, ηt = 1/√t. In Section 3.2.4 we will discuss the decay schedule and
convergence rates of a generalized version of gradient descent.

Fixed Stepsize: Suppose J has a Lipschitz continuous gradient with mod-
ulus L. Using (3.22) and the gradient descent update wt+1 = wt − ηt∇J(wt)
one can write

J(wt+1) ≤ J(wt) + ⟨∇J(wt), wt+1 − wt⟩ + (L/2) ǁwt+1 − wtǁ²   (3.44)
        = J(wt) − ηt ǁ∇J(wt)ǁ² + (L/2) ηt² ǁ∇J(wt)ǁ².   (3.45)

Minimizing (3.45) as a function of ηt clearly shows that the upper bound on
J(wt+1) is minimized when we set ηt = 1/L, which is the fixed stepsize rule.
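Putting Algorithm 3.2 and the fixed stepsize rule together gives the short
sketch below (NumPy assumed; the quadratic test problem is ours):

    import numpy as np

    def gradient_descent(grad_J, w0, L, eps=1e-6, max_iter=100000):
        # Algorithm 3.2 with the fixed stepsize eta_t = 1/L.
        w = np.asarray(w0, dtype=float)
        for _ in range(max_iter):
            g = grad_J(w)
            if np.linalg.norm(g) < eps:
                break
            w = w - g / L
        return w

    # J(w) = 1/2 w^T A w - b^T w from (3.48); here L = lambda_max(A) = 10
    A, b = np.diag([2.0, 10.0]), np.array([1.0, 1.0])
    w = gradient_descent(lambda w: A @ w - b, np.zeros(2), L=10.0)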

Theorem 3.12 Suppose J has a Lipschitz continuous gradient with modu-
lus L. Then Algorithm 3.2 with a fixed stepsize ηt = 1/L will return a solution
wt with ǁ∇J(wt)ǁ ≤ ϵ in at most O(1/ϵ²) iterations.

Proof Plugging in ηt = 1/L and rearranging (3.45) obtains

(1/2L) ǁ∇J(wt)ǁ² ≤ J(wt) − J(wt+1).   (3.46)

Summing this inequality over t

(1/2L) Σ_{t=0}^{T} ǁ∇J(wt)ǁ² ≤ J(w0) − J(wT) ≤ J(w0) − J(w∗),

which clearly shows that ǁ∇J(wt)ǁ → 0 as t → ∞. Furthermore, we can
write the following simple inequality:

ǁ∇J(wT)ǁ ≤ √(2L(J(w0) − J(w∗)) / (T + 1)).

Setting the right hand side equal to ϵ and solving for T shows that T is
O(1/ϵ²) as claimed.
If, in addition to having a Lipschitz continuous gradient, J is σ-strongly
convex, then more can be said. First, one can translate convergence in
ǁ∇J(wt)ǁ to convergence in function values. Towards this end, use (3.17) to
write

J(wt) ≤ J(w∗) + (1/2σ) ǁ∇J(wt)ǁ².

Therefore, it follows that whenever ǁ∇J(wt)ǁ < ϵ we have J(wt) − J(w∗) <
ϵ²/2σ. Furthermore, we can strengthen the rates of convergence.

Theorem 3.13 Assume everything as in Theorem 3.12. Moreover assume
that J is σ-strongly convex, and let c := 1 − σ/L. Then J(wt) − J(w∗) ≤ ϵ
after at most

log((J(w0) − J(w∗))/ϵ) / log(1/c)   (3.47)

iterations.

Proof Combining (3.46) with ǁ∇J(wt)ǁ² ≥ 2σ(J(wt) − J(w∗)), and using
the definition of c one can write

c(J(wt) − J(w∗)) ≥ J(wt+1) − J(w∗).

Applying the above equation recursively

c^T (J(w0) − J(w∗)) ≥ J(wT) − J(w∗).

Setting ϵ = c^T (J(w0) − J(w∗)), solving for T, and rearranging yields (3.47).

When applied to practical problems which are not strongly convex, gradient
descent yields a low accuracy solution within a few iterations. However, as
the iterations progress the method “stalls” and no further increase in accuracy
is obtained because of the O(1/ϵ²) rate of convergence. On the other hand,
if the function is strongly convex, then gradient descent converges linearly,
that is, in O(log(1/ϵ)) iterations. However, the number

of iterations depends inversely on log(1/c). If we approximate log(1/c) =
− log(1 − σ/L) ≈ σ/L, then it shows that convergence depends on the ratio
L/σ. This ratio is called the condition number of a problem. If the problem
is well conditioned, i.e., σ ≈ L, then gradient descent converges extremely
fast. In contrast, if σ ≪ L then gradient descent requires many iterations.
This is best illustrated with an example: Consider the quadratic objective
function

J(w) = ½ wᵀAw − bᵀw,   (3.48)

where A ∈ Rn× n is a symmetric positive definite matrix, and b ∈ Rn is any
arbitrary vector.
Recall that a twice differentiable function is σ-strongly convex and has a
Lipschitz continuous gradient with modulus L if and only if its Hessian sat-
isfies LI ≥ ∇ 2J(w) ≥ σI (see (3.14) and (3.21)). In the case of the quadratic
function (3.48) ∇ 2J(w) = A and hence σ = λmin and L = λmax, where λmin
(respectively λmax) denotes the minimum (respectively maximum) eigen-
value of A. One can thus change the condition number of the problem by
varying the eigen-spectrum of the matrix A. For instance, if we set A to
the n × n identity matrix, then λmax = λmin = 1 and hence the problem is
well conditioned. In this case, gradient descent converges very quickly to the
optimal solution. We illustrate this behavior on a two dimensional quadratic
function in Figure 3.8 (right).
On the other hand, if we choose A such that λmax ≫ λmin then the problem
(3.48) becomes ill-conditioned. In this case gradient descent exhibits
zigzagging and slow convergence as can be seen in Figure 3.8 (left). Because
of these shortcomings, gradient descent is not widely used in practice. A
number of different algorithms we describe below can be understood as
explicitly or implicitly changing the condition number of the problem to
accelerate convergence.

3.2.4 Mirror Descent
One way to motivate gradient descent is to use the following quadratic ap-
proximation of the objective function

Qt(w) := J(wt) + ⟨∇J(wt), w − wt⟩ + ½ (w − wt)ᵀ(w − wt),   (3.49)

where, as in the previous section, ∇J(·) denotes the gradient of J. Mini-
mizing this quadratic model at every iteration entails taking gradients with

Fig. 3.8. Convergence of gradient descent with exact line search on two quadratic
problems (3.48). The problem on the left is ill-conditioned, whereas the problem
on the right is well-conditioned. We plot the contours of the objective function,
and the steps taken by gradient descent. As can be seen gradient descent converges
fast on the well conditioned problem, while it zigzags and takes many iterations to
converge on the ill-conditioned problem.

respect to w and setting it to zero, which gives


w − wt := −∇J(wt). (3.50)
Performing a line search along the direction −∇J(wt) recovers the familiar
gradient descent update
wt+1 = wt − ηt∇J(wt). (3.51)
The closely related mirror descent method replaces the quadratic penalty
in (3.49) by a Bregman divergence defined by some convex function f to yield
Qt(w) := J(wt) + ⟨ ∇J(wt), w − wt⟩ + ∆f (w, wt). (3.52)
Computing the gradient, setting it to zero, and using ∇ w∆f (w, wt) = ∇f (w)−
∇f (wt), the minimizer of the above model can be written as
∇f (w) − ∇f (wt) = −∇J(wt). (3.53)
As before, by using a stepsize ηt the resulting updates can be written as
wt+1 = ∇f⁻¹(∇f(wt) − ηt∇J(wt)).   (3.54)

It is easy to verify that choosing f(·) = ½ ǁ·ǁ² recovers the usual gradient
descent updates. On the other hand if we choose f to be the un-normalized
entropy (3.30) then ∇f(·) = log(·) and therefore (3.54) specializes to

wt+1 = exp(log(wt) − ηt∇J(wt)) = wt exp(−ηt∇J(wt)),   (3.55)

which is sometimes called the Exponentiated Gradient (EG) update.
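In code, the EG update is a componentwise one-liner applied iteratively;
a short sketch (function name ours, NumPy assumed):

    import numpy as np

    def exponentiated_gradient(grad_J, w0, eta, steps):
        # Mirror descent (3.54) with f the un-normalized entropy (3.30),
        # i.e. the EG update (3.55): w_{t+1} = w_t * exp(-eta * grad J(w_t)).
        w = np.asarray(w0, dtype=float)
        for _ in range(steps):
            w = w * np.exp(-eta * grad_J(w))
        return w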

Theorem 3.14 Let J be a convex function and J(w∗) denote its minimum
value. The mirror descent updates (3.54) with a σ-strongly convex function
f satisfy

(∆f(w∗, w1) + (1/2σ) Σ_t ηt² ǁ∇J(wt)ǁ²) / (Σ_t ηt) ≥ min_t J(wt) − J(w∗).

Proof Using the convexity of J (see (3.7)) and (3.54) we can write

J(w∗) ≥ J(wt) + ⟨w∗ − wt, ∇J(wt)⟩
       = J(wt) − (1/ηt) ⟨w∗ − wt, ∇f(wt+1) − ∇f(wt)⟩.

Now applying Lemma 3.11 and rearranging

∆f(w∗, wt) − ∆f(w∗, wt+1) + ∆f(wt, wt+1) ≥ ηt (J(wt) − J(w∗)).

Summing over t = 1, . . . , T

∆f(w∗, w1) − ∆f(w∗, wT+1) + Σ_t ∆f(wt, wt+1) ≥ Σ_t ηt (J(wt) − J(w∗)).

Noting that ∆f(w∗, wT+1) ≥ 0, J(wt) − J(w∗) ≥ min_t J(wt) − J(w∗), and
rearranging it follows that

(∆f(w∗, w1) + Σ_t ∆f(wt, wt+1)) / (Σ_t ηt) ≥ min_t J(wt) − J(w∗).   (3.56)

Using (3.17) and (3.54)

∆f(wt, wt+1) ≤ (1/2σ) ǁ∇f(wt) − ∇f(wt+1)ǁ² = (1/2σ) ηt² ǁ∇J(wt)ǁ².   (3.57)

The proof is completed by plugging (3.57) into (3.56).

Corollary 3.15 If J has a Lipschitz continuous gradient with modulus L,
and the stepsizes ηt are chosen as

ηt = √(2σ∆f(w∗, w1)) / (L√t),   (3.58)

then

min_{1≤t≤T} J(wt) − J(w∗) ≤ L √(2∆f(w∗, w1)/σ) · (1/√T).

Proof Since ∇J is Lipschitz continuous

min_{1≤t≤T} J(wt) − J(w∗) ≤ (∆f(w∗, w1) + (1/2σ) Σ_t ηt² L²) / (Σ_t ηt).

Plugging in (3.58) and using Problem 3.15

min_{1≤t≤T} J(wt) − J(w∗) ≤ L √(∆f(w∗, w1)/(2σ)) · (1 + Σ_t 1/t)/(Σ_t 1/√t)
                           ≤ L √(2∆f(w∗, w1)/σ) · (1/√T).

3.2.5 Conjugate Gradient
Let us revisit the problem of minimizing the quadratic objective function
(3.48). Since ∇J(w) = Aw − b, at the optimum ∇J(w) = 0 (see Lemma 3.6)
and hence

Aw = b. (3.59)

In fact, the Conjugate Gradient (CG) algorithm was first developed as a
method to solve the above linear system.
As we already saw, updating w along the negative gradient direction may
lead to zigzagging. Therefore CG uses the so-called conjugate directions.

Definition 3.16 (Conjugate Directions) Non zero vectors pt and pt′ are
said to be conjugate with respect to a symmetric positive definite matrix A
if pt′ᵀ A pt = 0 whenever t ≠ t′.

Conjugate directions {p0, . . . , pn−1} are linearly independent and form a
basis. To see this, suppose the pt's are not linearly independent. Then
there exist non-zero coefficients σt such that Σ_t σt pt = 0. The pt's are
conjugate directions, therefore pt′ᵀ A (Σ_t σt pt) = Σ_t σt pt′ᵀ A pt = σt′ pt′ᵀ A pt′ =
0 for all t′. Since A is positive definite this implies that σt′ = 0 for all t′, a
contradiction.
As it turns out, the conjugate directions can be generated iteratively as
follows: Starting with any w0 ∈ Rⁿ define p0 = −g0 = b − Aw0, and set

αt = −gtᵀpt / (ptᵀApt)   (3.60a)
wt+1 = wt + αt pt   (3.60b)
gt+1 = Awt+1 − b   (3.60c)
βt+1 = gt+1ᵀApt / (ptᵀApt)   (3.60d)
pt+1 = −gt+1 + βt+1 pt   (3.60e)

The following theorem asserts that the pt generated by the above procedure
are indeed conjugate directions.

Theorem 3.17 Suppose the t-th iterate generated by the conjugate gradient
method (3.60) is not the solution of (3.59), then the following properties hold:

span{g0, g1, . . . , gt} = span{g0, Ag0, . . . , Aᵗg0}.   (3.61)
span{p0, p1, . . . , pt} = span{g0, Ag0, . . . , Aᵗg0}.   (3.62)
pjᵀ gt = 0 for all j < t.   (3.63)
pjᵀ A pt = 0 for all j < t.   (3.64)

Proof The proof is by induction. The induction hypothesis holds trivially at
t = 0. Assuming that (3.61) to (3.64) hold for some t, we prove that they
continue to hold for t + 1.

Step 1: We first prove that (3.63) holds. Using (3.60c), (3.60b) and (3.60a)

pjᵀ gt+1 = pjᵀ (Awt+1 − b)
         = pjᵀ (Awt + αt Apt − b)
         = pjᵀ gt − (gtᵀpt / ptᵀApt) pjᵀ Apt.

For j = t, both terms cancel out, while for j < t both terms vanish due to
the induction hypothesis.

Step 2: Next we prove that (3.61) holds. Using (3.60c) and (3.60b)

gt+1 = Awt+1 − b = Awt + αt Apt − b = gt + αt Apt.

By our induction hypothesis, gt ∈ span{g0, Ag0, . . . , Aᵗg0}, while Apt ∈
span{Ag0, A²g0, . . . , Aᵗ⁺¹g0}. Combining the two we conclude that gt+1 ∈
span{g0, Ag0, . . . , Aᵗ⁺¹g0}. On the other hand, we already showed that gt+1
is orthogonal to {p0, p1, . . . , pt}. Therefore, gt+1 ∉ span{p0, p1, . . . , pt}, and
our induction assumption implies that gt+1 ∉ span{g0, Ag0, . . . , Aᵗg0}. This
allows us to conclude that span{g0, g1, . . . , gt+1} = span{g0, Ag0, . . . , Aᵗ⁺¹g0}.

Step 3: We now prove that (3.64) holds. Using (3.60e)

pt+1ᵀ A pj = −gt+1ᵀ A pj + βt+1 ptᵀ A pj.

By the definition of βt+1 (3.60d) the above expression vanishes for j = t. For
j < t, the first term is zero because Apj ∈ span{p0, p1, . . . , pj+1}, a subspace
orthogonal to gt+1 as already shown in Step 1. The induction hypothesis
guarantees that the second term is zero.

Step 4: Clearly, (3.61) and (3.60e) imply (3.62). This concludes the proof.

A practical implementation of (3.60) requires two more observations:
First, using (3.60e) and (3.63)

−gtᵀ pt = gtᵀ gt − βt gtᵀ pt−1 = gtᵀ gt.

Therefore (3.60a) simplifies to

αt = gtᵀ gt / (ptᵀ A pt).   (3.65)

Second, using (3.60c) and (3.60b)

gt+1 − gt = A(wt+1 − wt) = αt A pt.

But gt ∈ span{p0, . . . , pt}, a subspace orthogonal to gt+1 by (3.63). Therefore
gt+1ᵀ A pt = (1/αt)(gt+1ᵀ gt+1). Substituting this back into (3.60d) and using
(3.65) yields

βt+1 = gt+1ᵀ gt+1 / (gtᵀ gt).   (3.66)

We summarize the CG algorithm in Algorithm 3.3. Unlike gradient descent
whose convergence rates for minimizing the quadratic objective function
(3.48) depend upon the condition number of A, as the following theorem
shows, the CG iterates converge in at most n steps.

Theorem 3.18 The CG iterates (3.60) converge to the minimizer of (3.48)
after at most n steps.

Proof Let w∗ denote the minimizer of (3.48). Since the pt's form a basis

w∗ − w0 = σ0 p0 + . . . + σn−1 pn−1,

for some scalars σt. Our proof strategy will be to show that the coefficients
Algorithm 3.3 Conjugate Gradient

1: Input: Initial point w0, residual norm tolerance ϵ
2: Set t = 0, g0 = Aw0 − b, and p0 = −g0
3: while ǁAwt − bǁ ≥ ϵ do
4:   αt = gtᵀgt / (ptᵀApt)
5:   wt+1 = wt + αt pt
6:   gt+1 = gt + αt Apt
7:   βt+1 = gt+1ᵀgt+1 / (gtᵀgt)
8:   pt+1 = −gt+1 + βt+1 pt
9:   t = t + 1
10: end while
11: Return: wt
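Algorithm 3.3 translates almost line by line into Python (a sketch, NumPy
assumed):

    import numpy as np

    def conjugate_gradient(A, b, w0, eps=1e-10):
        # Algorithm 3.3: minimizes (3.48), i.e. solves Aw = b for positive
        # definite A; converges in at most n steps in exact arithmetic.
        w = np.asarray(w0, dtype=float)
        g = A @ w - b
        p = -g
        while np.linalg.norm(g) >= eps:
            Ap = A @ p
            alpha = (g @ g) / (p @ Ap)        # (3.65)
            w = w + alpha * p
            g_new = g + alpha * Ap            # gradient update, line 6
            beta = (g_new @ g_new) / (g @ g)  # (3.66)
            p = -g_new + beta * p
            g = g_new
        return w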

σt coincide with αt defined in (3.60a). Towards this end premultiply with
ptᵀA and use conjugacy to obtain

σt = ptᵀ A (w∗ − w0) / (ptᵀ A pt).   (3.67)

On the other hand, following the iterative process (3.60b) from w0 until wt
yields

wt − w0 = α0 p0 + . . . + αt−1 pt−1.

Again premultiplying with ptᵀA and using conjugacy

ptᵀ A (wt − w0) = 0.   (3.68)

Substituting (3.68) into (3.67) produces

σt = ptᵀ A (w∗ − wt) / (ptᵀ A pt) = −gtᵀ pt / (ptᵀ A pt),   (3.69)

thus showing that σt = αt.

Observe that the gt+1 computed via (3.60c) is nothing but the gradient of
J(wt+1). Furthermore, consider the following one dimensional optimization
problem:

min_{α∈R} φt(α) := J(wt + α pt).

Differentiating φt with respect to α

φt′(α) = ptᵀ (Awt + α Apt − b) = ptᵀ (gt + α Apt).
The gradient vanishes if we set α = −gtᵀ pt / (ptᵀ A pt), which recovers (3.60a).
In other words, every iteration of CG minimizes J(w) along a conjugate
direction pt. Contrast this with gradient descent which minimizes J(w) along
the negative gradient direction −gt at every iteration.
It is natural to ask if this idea of generating conjugate directions and
minimizing the objective function along these directions can be applied to
general convex functions. The main difficulty here is that Theorems 3.17 and
3.18 do not hold. In spite of this, extensions of CG are effective even in this
setting. Basically the update rules for gt and pt remain the same, but the
parameters αt and βt are computed differently. Table 3.2 gives an overview
of different extensions. See [NW99, Lue84] for details.

Table 3.2. Non-Quadratic modifications of Conjugate Gradient Descent

Generic Method    Compute the Hessian Kt := ∇²J(wt) and update αt and
                  βt with αt = −gtᵀpt / (ptᵀKtpt) and
                  βt+1 = gt+1ᵀKtpt / (ptᵀKtpt).
Fletcher-Reeves   Set αt = argmin_α J(wt + αpt) and
                  βt+1 = gt+1ᵀgt+1 / (gtᵀgt).
Polak-Ribière     Set αt = argmin_α J(wt + αpt), yt = gt+1 − gt, and
                  βt+1 = ytᵀgt+1 / (gtᵀgt).
                  In practice, Polak-Ribière tends to be better than
                  Fletcher-Reeves.
Hestenes-Stiefel  Set αt = argmin_α J(wt + αpt), yt = gt+1 − gt, and
                  βt+1 = ytᵀgt+1 / (ytᵀpt).

3.2.6 Higher Order Methods

Recall the motivation for gradient descent as the minimizer of the quadratic
model

Qt(w) := J(wt) + ⟨∇J(wt), w − wt⟩ + ½ (w − wt)ᵀ(w − wt).

The quadratic penalty in the above equation uniformly penalizes deviation
from wt in different dimensions. When the function is ill-conditioned one
would intuitively want to penalize deviations in different directions differ-
ently. One way to achieve this is by using the Hessian, which results in the
Algorithm 3.4 Newton’s Method
1: Input: Initial point w0, gradient norm tolerance ϵ
2: Set t = 0
3: while ǁ∇J(wt )ǁ > ϵ do
4: Compute pt := −∇ 2J(wt)− 1∇J(wt)
5: Compute ηt = argminη J(wt + ηp t) e.g., via Algorithm 3.1.
6: wt+1 = wt + ηtp t
7: t= t+1
8: end while
9: Return: w t

following second order Taylor approximation:

Qt(w) := J(wt) + ⟨∇J(wt), w − wt⟩ + ½ (w − wt)ᵀ ∇²J(wt) (w − wt).   (3.70)

Of course, this requires that J be twice differentiable. We will also assume
that J is strictly convex and hence its Hessian is positive definite and in-
vertible. Minimizing Qt by taking gradients with respect to w and setting
them to zero obtains

w − wt := −∇²J(wt)⁻¹ ∇J(wt).   (3.71)

Since we are only minimizing a model of the objective function, we perform
a line search along the descent direction (3.71) to compute the stepsize ηt,
which yields the next iterate:

wt+1 = wt − ηt ∇²J(wt)⁻¹ ∇J(wt).   (3.72)

Details can be found in Algorithm 3.4.
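A compact sketch of Algorithm 3.4 in Python (NumPy assumed; the
line_search argument can be any routine such as the interval bisection
above, and its name is ours):

    import numpy as np

    def newton(grad_J, hess_J, w0, line_search, eps=1e-8, max_iter=100):
        # Algorithm 3.4: line_search(w, p) returns the stepsize eta_t.
        w = np.asarray(w0, dtype=float)
        for _ in range(max_iter):
            g = grad_J(w)
            if np.linalg.norm(g) <= eps:
                break
            p = -np.linalg.solve(hess_J(w), g)  # Newton direction (3.71)
            w = w + line_search(w, p) * p
        return w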
Suppose w∗ denotes the minimum of J(w). We say that an algorithm
exhibits quadratic convergence if the sequences of iterates {wk} generated
by the algorithm satisfies:
ǁwk+1 − w∗ǁ ≤ C ǁwk − w∗ǁ2 (3.73)
for some constant C > 0. We now show that Newton’s method exhibits
quadratic convergence close to the optimum.

Theorem 3.19 (Quadratic convergence of Newton's Method) Suppose
J is twice differentiable, strongly convex, and the Hessian of J is bounded
and Lipschitz continuous with modulus M in a neighborhood of the solution
w∗. Furthermore, assume that ǁ∇²J(w)⁻¹ǁ ≤ N. The iterations
wt+1 = wt − ∇²J(wt)⁻¹ ∇J(wt) converge quadratically to w∗, the minimizer
of J.

Proof First notice that

∇J(wt) − ∇J(w∗) = ∫₀¹ ∇²J(wt + t(w∗ − wt)) (wt − w∗) dt.   (3.74)

Next using the fact that ∇²J(wt) is invertible and the gradient vanishes at
the optimum (∇J(w∗) = 0), write

wt+1 − w∗ = wt − w∗ − ∇²J(wt)⁻¹ ∇J(wt)
          = ∇²J(wt)⁻¹ [∇²J(wt)(wt − w∗) − (∇J(wt) − ∇J(w∗))].   (3.75)

Using (3.75), (3.74), and the Lipschitz continuity of ∇²J

ǁ∇J(wt) − ∇J(w∗) − ∇²J(wt)(wt − w∗)ǁ
  = ǁ∫₀¹ [∇²J(wt + t(w∗ − wt)) − ∇²J(wt)] (wt − w∗) dtǁ
  ≤ ∫₀¹ ǁ∇²J(wt + t(w∗ − wt)) − ∇²J(wt)ǁ ǁwt − w∗ǁ dt
  ≤ ǁwt − w∗ǁ² ∫₀¹ M t dt = (M/2) ǁwt − w∗ǁ².   (3.76)

Finally use (3.75) and (3.76) to conclude that

ǁwt+1 − w∗ǁ ≤ (M/2) ǁ∇²J(wt)⁻¹ǁ ǁwt − w∗ǁ² ≤ (NM/2) ǁwt − w∗ǁ².

Newton’s method as we described it suffers from two major problems.
First, it applies only to twice differentiable, strictly convex functions. Sec-
ond, it involves computing and inverting of the n × n Hessian matrix at
every iteration, thus making it computationally very expensive. Although
Newton’s method can be extended to deal with positive semi-definite Hes-
sian matrices, the computational burden often makes it unsuitable for large
scale applications. In such cases one resorts to Quasi-Newton methods.

3.2.6.1 Quasi-Newton Methods
Unlike Newton’s method, which computes the Hessian of the objective func-
tion at every iteration, quasi-Newton methods never compute the Hessian;
they approximate it from past gradients. Since they do not require the ob-
jective function to be twice differentiable, quasi-Newton methods are much

Fig. 3.9. The blue solid line depicts the one dimensional convex function J(w) =
w⁴ + 20w² + w. The green dotted-dashed line represents the first order Taylor
approximation to J(w), while the red dashed line represents the second order Taylor
approximation, both evaluated at w = 2.

more widely applicable. They are widely regarded as the workhorses of
smooth nonlinear optimization due to their combination of computational
efficiency and good asymptotic convergence. The most popular quasi-Newton
algorithm is BFGS, named after its discoverers Broyden, Fletcher, Goldfarb,
and Shanno. In this section we will describe BFGS and its limited memory
counterpart LBFGS.
Suppose we are given a smooth (not necessarily strictly) convex objective
function J : Rⁿ → R and a current iterate wt ∈ Rⁿ. Just like Newton's
method, BFGS forms a local quadratic model of the objective function:

Qt(w) := J(wt) + ⟨∇J(wt), w − wt⟩ + ½ (w − wt)ᵀ Ht (w − wt).   (3.77)

Unlike Newton’s method which uses the Hessian to build its quadratic model
(3.70), BFGS uses the matrix Ht > 0, which is a positive-definite estimate
of the Hessian. A quasi-Newton direction of descent is found by minimizing
Qt(w):

w − wt = −Ht− 1 ∇ J(w t ). (3.78)

The stepsize ηt > 0 is found by a line search obeying the Wolfe conditions
(3.42) and (3.43). The final update is given by

wt+1 = wt − ηt Ht⁻¹ ∇J(wt).   (3.79)
Given wt+1 we need to update our quadratic model (3.77) to

Qt+1(w) := J(wt+1) + ⟨∇J(wt+1), w − wt+1⟩ + ½ (w − wt+1)ᵀ Ht+1 (w − wt+1).   (3.80)
When updating our model it is reasonable to expect that the gradient of
Qt+1 should match the gradient of J at wt and wt+1. Clearly,
∇Qt+1(w) = ∇J(wt+1) + Ht+1(w − wt+1), (3.81)
which implies that ∇Qt+1(wt+1 ) = ∇J(wt+1), and hence our second con-
dition is automatically satisfied. In order to satisfy our first condition, we
require
∇Qt+1(wt) = ∇J(wt+1 ) + Ht+1(wt − wt+1) = ∇J(wt). (3.82)
By rearranging, we obtain the so-called secant equation:
Ht+1st = yt, (3.83)
where st := wt+1 − wt and yt := ∇J(wt+1) − ∇J(wt) denote the most recent
step along the optimization trajectory in parameter and gradient space,
respectively. Since Ht+1 is a positive definite matrix, pre-multiplying the
secant equation by st yields the curvature condition
stᵀ yt > 0.   (3.84)
If the curvature condition is satisfied, then there are an infinite number
of matrices Ht+1 which satisfy the secant equation (the secant equation
represents n linear equations, but the symmetric matrix Ht+1 has n(n+1)/2
degrees of freedom). To resolve this issue we choose the closest matrix to
Ht which satisfies the secant equation. The key insight of BFGS comes
from the observation that the descent direction computation (3.78) involves
the inverse matrix Bt := Ht⁻¹. Therefore, we choose the matrix Bt+1 := Ht+1⁻¹
such that it is close to Bt and also satisfies the secant equation:


minǁB − Btǁ (3.85)
B
s. t. B = BT and Byt = st. (3.86)
If the matrix norm ǁ·ǁ is appropriately chosen [NW99], then it can be shown
that
Bt+1 = (1 −ρt st ytT )Bt (1 −ρt yt stT ) + ρt st sT
t , (3.87)
Algorithm 3.5 LBFGS

1: Input: Initial point w0, gradient norm tolerance ϵ > 0
2: Set t = 0 and B0 = I
3: while ǁ∇J(wt)ǁ > ϵ do
4:   pt = −Bt ∇J(wt)
5:   Find ηt that obeys (3.42) and (3.43)
6:   st = ηt pt
7:   wt+1 = wt + st
8:   yt := ∇J(wt+1) − ∇J(wt)
9:   if t = 0: Bt := (stᵀyt / ytᵀyt) I
10:  ρt = (stᵀyt)⁻¹
11:  Bt+1 = (I − ρt st ytᵀ) Bt (I − ρt yt stᵀ) + ρt st stᵀ
12:  t = t + 1
13: end while
14: Return: wt

where ρt := (ytᵀst)⁻¹. In other words, the matrix Bt is modified via an
incremental rank-two update, which is very efficient to compute, to obtain
Bt+1.
There exists an interesting connection between the BFGS update (3.87)
and the Hestenes-Stiefel variant of conjugate gradient. To see this assume
that an exact line search was used to compute wt+1, and therefore
stᵀ∇J(wt+1) = 0. Furthermore, assume that Bt = I, and use (3.87) to write

pt+1 = −Bt+1 ∇J(wt+1) = −∇J(wt+1) + (ytᵀ∇J(wt+1) / ytᵀst) st,   (3.88)

which recovers the Hestenes-Stiefel update (see (3.60e) and Table 3.2).
Limited-memory BFGS (LBFGS) is a variant of BFGS designed for solving
large-scale optimization problems where the O(d²) cost of storing and
updating Bt would be prohibitively expensive. LBFGS approximates the
quasi-Newton direction (3.78) directly from the last m pairs of st and yt via
a matrix-free approach. This reduces the cost to O(md) space and time per
iteration, with m freely chosen. Details can be found in Algorithm 3.5.
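The matrix-free computation of pt = −Bt∇J(wt) is usually done with the
so-called two-loop recursion; the sketch below (ours, NumPy assumed)
consumes the m most recent (st, yt) pairs and never forms Bt explicitly:

    import numpy as np

    def lbfgs_direction(g, s_hist, y_hist):
        # Two-loop recursion: returns -B_t g from the stored pairs
        # (s_i, y_i), with B_0 = (s^T y / y^T y) I as initial scaling
        # (a common choice, cf. line 9 of Algorithm 3.5).
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if s_hist:
            s, y = s_hist[-1], y_hist[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        return -q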

3.2.6.2 Spectral Gradient Methods
Although spectral gradient methods do not use the Hessian explicitly, they
are motivated by arguments very reminiscent of the Quasi-Newton methods.
Recall the update rule (3.79) and secant equation (3.83). Suppose we want

a very simple matrix which approximates the Hessian. Specifically, we want

Ht+1 = αt+1I (3.89)

where αt+1 is a scalar and I denotes the identity matrix. Then the secant
equation (3.83) becomes
αt+1st = yt. (3.90)

In general, the above equation cannot be solved. Therefore we use the αt+1
which minimizes ǁαt+1 st − ytǁ², which yields the Barzilai-Borwein (BB)
stepsize

αt+1 = stᵀ yt / (stᵀ st).   (3.91)

As it turns out, αt+1 lies between the minimum and maximum eigenvalue of
the average Hessian in the direction st, hence the name Spectral Gradient
method. The parameter update (3.79) is now given by

wt+1 = wt − (1/αt) ∇J(wt).   (3.92)

A practical implementation uses safeguards to ensure that the stepsize αt+1
is neither too small nor too large. Given 0 < αmin < αmax < ∞ we compute

αt+1 = min(αmax, max(αmin, stᵀyt / stᵀst)).   (3.93)

One of the peculiar features of spectral gradient methods is their use


of a non-monotone line search. In all the algorithms we have seen so far,
the stepsize is chosen such that the objective function J decreases at every
iteration. In contrast, non-monotone line searches employ a parameter M ≥
1 and ensure that the objective function decreases in every M iterations. Of
course, setting M = 1 results in the usual monotone line search. Details can
be found in Algorithm 3.6.
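The safeguarded BB stepsize (3.93) is a one-line computation; a sketch
(function name ours, NumPy assumed):

    import numpy as np

    def bb_stepsize(s, y, a_min=1e-10, a_max=1e10):
        # Safeguarded Barzilai-Borwein stepsize (3.93).
        return min(a_max, max(a_min, (s @ y) / (s @ s)))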

3.2.7 Bundle Methods
The methods we discussed above are applicable for minimizing smooth, con-
vex objective functions. Some regularized risk minimization problems involve
a non-smooth objective function. In such cases, one needs to use bundle
methods. In order to lay the ground for bundle methods we first describe
their precursor the cutting plane method [Kel60]. Cutting plane method is
based on a simple observation: A convex function is bounded from below by
Algorithm 3.6 Spectral Gradient Method

1: Input: w0, M ≥ 1, αmax > αmin > 0, γ ∈ (0, 1), 1 > σ2 > σ1 > 0,
   α0 ∈ [αmin, αmax], and ϵ > 0
2: Initialize: t = 0
3: while ǁ∇J(wt)ǁ > ϵ do
4:   λ = 1
5:   while TRUE do
6:     dt = −(1/αt) ∇J(wt)
7:     w+ = wt + λ dt
8:     δ = ⟨dt, ∇J(wt)⟩
9:     if J(w+) ≤ min_{0≤j≤min(t,M−1)} J(wt−j) + γλδ then
10:      wt+1 = w+
11:      st = wt+1 − wt
12:      yt = ∇J(wt+1) − ∇J(wt)
13:      break
14:    else
15:      λtmp = −½ λ² δ / (J(w+) − J(wt) − λδ)
16:      if λtmp > σ1 and λtmp < σ2 λ then
17:        λ = λtmp
18:      else
19:        λ = λ/2
20:      end if
21:    end if
22:  end while
23:  αt+1 = min(αmax, max(αmin, stᵀyt / stᵀst))
24:  t = t + 1
25: end while
26: Return: wt

its linearization (i.e., first order Taylor approximation). See Figures 3.4 and
3.5 for geometric intuition, and recall (3.7) and (3.13):

J(w) ≥ J(w′) + ⟨w − w′, s′⟩ ∀w and s′ ∈ ∂J(w′).   (3.94)

Given subgradients s1, s2, . . . , st evaluated at locations w0, w1, . . . , wt−1, we
can construct a tighter (piecewise linear) lower bound for J as follows (also
see Figure 3.10):

J(w) ≥ J_t^CP(w) := max_{1≤i≤t} {J(wi−1) + ⟨w − wi−1, si⟩}.   (3.95)
Given iterates {wi}_{i=0}^{t−1}, the cutting plane method minimizes J_t^CP
to obtain the next iterate wt:

wt := argmin_w J_t^CP(w).   (3.96)

This iteratively refines the piecewise linear lower bound J^CP and allows us
to get close to the minimum of J (see Figure 3.10 for an illustration).
If w∗ denotes the minimizer of J, then clearly each J(wi) ≥ J(w∗) and
hence min_{0≤i≤t} J(wi) ≥ J(w∗). On the other hand, since J ≥ J_t^CP it
follows that J(w∗) ≥ J_t^CP(wt). In other words, J(w∗) is sandwiched between
min_{0≤i≤t} J(wi) and J_t^CP(wt) (see Figure 3.11 for an illustration). The
cutting plane method monitors the monotonically decreasing quantity

ϵt := min_{0≤i≤t} J(wi) − J_t^CP(wt),   (3.97)

and terminates whenever ϵt falls below a predefined threshold ϵ. This ensures
that the solution J(wt) is ϵ optimal, that is, J(wt) ≤ J(w∗) + ϵ.
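For intuition, the sketch below implements the cutting plane method for a
one-dimensional convex J on an interval, solving (3.96) as a small linear
program over (w, ξ), where ξ upper-bounds all cuts. This is an illustration
under our own conventions, not the book's implementation (SciPy assumed):

    import numpy as np
    from scipy.optimize import linprog

    def cutting_plane(J, grad_J, w0, lo, hi, eps=1e-6, max_iter=100):
        # Maintain cuts J(wi) + si (w - wi); minimize their max via an LP:
        #   min xi  s.t.  si * w - xi <= si * wi - J(wi),  lo <= w <= hi.
        ws, Js, ss = [w0], [J(w0)], [grad_J(w0)]
        for _ in range(max_iter):
            A = np.column_stack([ss, -np.ones(len(ws))])
            b = np.array([s * w - f for w, f, s in zip(ws, Js, ss)])
            res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                          bounds=[(lo, hi), (None, None)])
            w, lower = res.x                 # w_t and J_t^CP(w_t)
            if min(Js) - lower < eps:        # gap eps_t of (3.97)
                return w
            ws.append(w); Js.append(J(w)); ss.append(grad_J(w))
        return ws[int(np.argmin(Js))]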

Fig. 3.10. A convex function (blue solid curve) is bounded from below by its lin-
earizations (dashed lines). The gray area indicates the piecewise linear lower bound
obtained by using the linearizations. We depict a few iterations of the cutting plane
method. At each iteration the piecewise linear lower bound is minimized and a new
linearization is added at the minimizer (red rectangle). As can be seen, adding more
linearizations improves the lower bound.

Although the cutting plane method was shown to be convergent [Kel60], it is

Fig. 3.11. A convex function (blue solid curve) with four linearizations evaluated at
four different locations (magenta circles). The approximation gap ϵ3 at the end of
the fourth iteration is indicated by the height of the cyan horizontal band, i.e., the
difference between the lowest value of J(w) evaluated so far and the minimum of
J_4^CP(w) (red diamond).

well known (see e.g., [LNN95, Bel05]) that it can be very slow when new
iterates move too far away from the previous ones (i.e., causing unstable “zig-
zag” behavior in the iterates). In fact, in the worst case the cutting plane
method might require exponentially many steps to converge to an ϵ optimum
solution.
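To make the mechanics of (3.95)-(3.97) concrete, here is a minimal one dimensional sketch of the cutting plane method. The toy objective, the grid based inner minimization, and the tolerances are illustrative assumptions, not from the text; a practical implementation would minimize the piecewise linear model with an LP solver instead of a grid.

import numpy as np

def cutting_plane_1d(J, dJ, w0, lo, hi, eps=1e-3, max_iter=100):
    """Toy cutting plane method: maintain linearizations J(w_i) + s_i (w - w_i)
    as in (3.95) and minimize their pointwise maximum over [lo, hi]."""
    ws, Js, ss = [w0], [J(w0)], [dJ(w0)]
    grid = np.linspace(lo, hi, 2001)   # crude stand-in for an LP solver
    for _ in range(max_iter):
        # piecewise linear lower bound J_t^CP evaluated on the grid
        jcp = np.max([Ji + si * (grid - wi)
                      for wi, Ji, si in zip(ws, Js, ss)], axis=0)
        wt = grid[np.argmin(jcp)]      # next iterate, eq. (3.96)
        eps_t = min(Js) - jcp.min()    # approximation gap, eq. (3.97)
        if eps_t <= eps:
            break
        ws.append(wt); Js.append(J(wt)); ss.append(dJ(wt))
    return wt, eps_t

# minimize J(w) = (w - 1)^2 over [-3, 3]; the iterates approach w = 1
w, gap = cutting_plane_1d(lambda w: (w - 1)**2, lambda w: 2*(w - 1), -2.0, -3.0, 3.0)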
Bundle methods stabilize CPM by augmenting the piecewise linear lower bound (e.g., J_t^CP(w) in (3.95)) with a prox-function (i.e., proximity control function) which prevents overly large steps in the iterates [Kiw90]. Roughly speaking, there are 3 popular types of bundle methods, namely, proximal [Kiw90], trust region [SZ92], and level set [LNN95]. All three versions use (1/2)‖·‖² as their prox-function, but differ in the way they compute the new iterate:

proximal:     w_t := argmin_w { (ζ_t/2) ‖w − ŵ_{t−1}‖² + J_t^CP(w) },  (3.98)
trust region: w_t := argmin_w { J_t^CP(w)  such that  (1/2) ‖w − ŵ_{t−1}‖² ≤ κ_t },  (3.99)
level set:    w_t := argmin_w { (1/2) ‖w − ŵ_{t−1}‖²  such that  J_t^CP(w) ≤ τ_t },  (3.100)

where ŵ_{t−1} is the current prox-center, and ζ_t, κ_t, and τ_t are positive trade-off parameters of the stabilization. Although (3.98) can be shown to be equivalent to (3.99) for appropriately chosen ζ_t and κ_t, tuning ζ_t is rather difficult, while a trust region approach can be used for automatically tuning κ_t. Consequently the trust region algorithm BT of [SZ92] is widely used in practice.

3.3 Constrained Optimization

So far our focus was on unconstrained optimization problems. Many machine learning problems involve constraints, and can often be written in the following canonical form:

min_w J(w)  (3.101a)
s.t. c_i(w) ≤ 0 for i ∈ I  (3.101b)
     e_i(w) = 0 for i ∈ E,  (3.101c)

where both c_i and e_i are convex functions. We say that w is feasible if and only if it satisfies the constraints, that is, c_i(w) ≤ 0 for i ∈ I and e_i(w) = 0 for i ∈ E.
Recall that w is the minimizer of an unconstrained problem if and only if ‖∇J(w)‖ = 0 (see Lemma 3.6). Unfortunately, when constraints are present one cannot use this simple characterization of the solution. For instance, the w at which ‖∇J(w)‖ = 0 may not be a feasible point. To illustrate, consider the following simple minimization problem (see Figure 3.12):

min_w (1/2) w²  (3.102a)
s.t. 1 ≤ w ≤ 2.  (3.102b)

Clearly, (1/2)w² is minimized at w = 0, but because of the presence of the constraints, the minimum of (3.102) is attained at w = 1, where ∇J(w) = w is equal to 1. Therefore, we need other ways to detect convergence. In Section 3.3.1 we discuss some general purpose algorithms based on the concept of orthogonal projection. In Section 3.3.2 we will discuss Lagrange duality, which can be used to further characterize the solutions of constrained optimization problems.

3.3.1 Projection Based Methods


Suppose we are interested in minimizing a smooth convex function of the
following form:
min_{w∈Ω} J(w),  (3.103)

Fig. 3.12. The unconstrained minimum of the quadratic function (1/2)w² is attained at w = 0 (red circle). But, if we enforce the constraints 1 ≤ w ≤ 2 (illustrated by the shaded area) then the minimizer is attained at w = 1 (green diamond).

where Ω is a convex feasible region. For instance, Ω may be described by convex functions c_i and e_i as in (3.101). The algorithms we describe in this section are applicable when Ω is a relatively simple set onto which we can compute an orthogonal projection. Given a point w′ and a feasible region Ω, the orthogonal projection P_Ω(w′) of w′ onto Ω is defined as

P_Ω(w′) := argmin_{w∈Ω} ‖w′ − w‖².  (3.104)

Geometrically speaking, P_Ω(w′) is the closest point to w′ in Ω. Of course, if w′ ∈ Ω then P_Ω(w′) = w′.
We are interested in finding an approximate solution of (3.103), that is, a w ∈ Ω such that

J(w) − min_{w∈Ω} J(w) = J(w) − J* ≤ ϵ,  (3.105)

for some pre-defined tolerance ϵ > 0. Of course, J* is unknown and hence the gap J(w) − J* cannot be computed in practice. Furthermore, as we showed in Section 3.3, for constrained optimization problems ‖∇J(w)‖ need not vanish at the optimal solution. Therefore, we will use the following stopping
Algorithm 3.7 Basic Projection Based Method
1: Input: Initial point w_0 ∈ Ω, and projected gradient norm tolerance ϵ > 0
2: Initialize: t = 0
3: while ‖P_Ω(w_t − ∇J(w_t)) − w_t‖ > ϵ do
4:   Find direction of descent d_t
5:   w_{t+1} = P_Ω(w_t + η_t d_t)
6:   t = t + 1
7: end while
8: Return: w_t

criterion in our algorithms:

‖P_Ω(w_t − ∇J(w_t)) − w_t‖ ≤ ϵ.  (3.106)

The intuition here is as follows: If w_t − ∇J(w_t) ∈ Ω then P_Ω(w_t − ∇J(w_t)) = w_t if, and only if, ∇J(w_t) = 0, that is, w_t is the global minimizer of J(w). On the other hand, if w_t − ∇J(w_t) ∉ Ω but P_Ω(w_t − ∇J(w_t)) = w_t, then the constraints are preventing us from making any further progress along the descent direction −∇J(w_t), and hence we should stop.
The basic projection based method is described in Algorithm 3.7. Any unconstrained optimization algorithm can be used to generate the direction of descent d_t, and a line search is used to find the stepsize η_t. The updated parameter w_t + η_t d_t is projected onto Ω to obtain w_{t+1}. If d_t is chosen to be the negative gradient direction −∇J(w_t), then the resulting algorithm is called the projected gradient method. One can show that the rates of convergence of gradient descent with various line search schemes are also preserved by projected gradient descent.
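As an illustration, here is a minimal sketch of the projected gradient method for a box constraint, where the orthogonal projection (3.104) is simply coordinate-wise clipping. The fixed stepsize and the toy objective are assumptions made for brevity; the version in the text would determine η_t by a line search.

import numpy as np

def project_box(w, lo, hi):
    """Orthogonal projection (3.104) onto the box {w : lo <= w <= hi}."""
    return np.clip(w, lo, hi)

def projected_gradient(grad, w, lo, hi, eta=0.1, eps=1e-6, max_iter=10000):
    """Projected gradient method with the stopping criterion (3.106)."""
    for _ in range(max_iter):
        if np.linalg.norm(project_box(w - grad(w), lo, hi) - w) <= eps:
            break                       # no further progress is possible
        w = project_box(w - eta * grad(w), lo, hi)
    return w

# minimize 0.5 * w^2 subject to 1 <= w <= 2, cf. (3.102); minimizer is w = 1
w_star = projected_gradient(lambda w: w, np.array([1.5]), 1.0, 2.0)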

3.3.2 Lagrange Duality

Lagrange duality plays a central role in constrained convex optimization. The basic idea here is to augment the objective function (3.101) with a weighted sum of the constraint functions by defining the Lagrangian:

L(w, α, β) = J(w) + Σ_{i∈I} α_i c_i(w) + Σ_{i∈E} β_i e_i(w)  (3.107)

for α_i ≥ 0 and β_i ∈ R. In the sequel, we will refer to α (respectively β) as the Lagrange multipliers associated with the inequality (respectively equality) constraints. Furthermore, we will call α and β dual feasible if and only if α_i ≥ 0 and β_i ∈ R. The Lagrangian satisfies the following fundamental property, which makes it extremely useful for constrained optimization.

Theorem 3.20 The Lagrangian (3.107) of (3.101) satisfies

max_{α≥0,β} L(w, α, β) = J(w) if w is feasible, and ∞ otherwise.

In particular, if J* denotes the optimal value of (3.101), then

J* = min_w max_{α≥0,β} L(w, α, β).

Proof First assume that w is feasible, that is, c_i(w) ≤ 0 for i ∈ I and e_i(w) = 0 for i ∈ E. Since α_i ≥ 0 we have

Σ_{i∈I} α_i c_i(w) + Σ_{i∈E} β_i e_i(w) ≤ 0,  (3.108)

with equality being attained by setting α_i = 0 whenever c_i(w) < 0. Consequently,

max_{α≥0,β} L(w, α, β) = max_{α≥0,β} J(w) + Σ_{i∈I} α_i c_i(w) + Σ_{i∈E} β_i e_i(w) = J(w)

whenever w is feasible. On the other hand, if w is not feasible then either c_{i′}(w) > 0 or e_{i′}(w) ≠ 0 for some i′. In the first case simply let α_{i′} → ∞ to see that max_{α≥0,β} L(w, α, β) → ∞. Similarly, when e_{i′}(w) ≠ 0 let β_{i′} → ∞ if e_{i′}(w) > 0 or β_{i′} → −∞ if e_{i′}(w) < 0 to arrive at the same conclusion.
If we define the Lagrange dual function

D(α, β) = min_w L(w, α, β)  (3.109)

for α ≥ 0 and β, then one can prove the following property, which is often called weak duality.

Theorem 3.21 (Weak Duality) The Lagrange dual function (3.109) satisfies

D(α, β) ≤ J(w)

for all feasible w and all α ≥ 0 and β. In particular

D* := max_{α≥0,β} min_w L(w, α, β) ≤ min_w max_{α≥0,β} L(w, α, β) = J*.  (3.110)
Proof As before, observe that whenever w is feasible

Σ_{i∈I} α_i c_i(w) + Σ_{i∈E} β_i e_i(w) ≤ 0.

Therefore

D(α, β) = min_w L(w, α, β) = min_w J(w) + Σ_{i∈I} α_i c_i(w) + Σ_{i∈E} β_i e_i(w) ≤ J(w)

for all feasible w and α ≥ 0 and β. In particular, one can choose w to be the minimizer of (3.101) and (α, β) to be maximizers of D(α, β) to obtain (3.110).
Weak duality holds for arbitrary functions, not necessarily convex ones. When the objective function and constraints are convex, and certain technical conditions, also known as Slater's conditions, hold, then we can say more.

Theorem 3.22 (Strong Duality) Suppose the objective function J and the constraints c_i for i ∈ I and e_i for i ∈ E in (3.101) are convex and the following constraint qualification holds:

There exists a w such that c_i(w) < 0 for all i ∈ I.

Then the Lagrange dual function (3.109) satisfies

D* := max_{α≥0,β} min_w L(w, α, β) = min_w max_{α≥0,β} L(w, α, β) = J*.  (3.111)

The proof of the above theorem is quite technical and can be found in any standard reference (e.g., [BV04]). We will therefore omit the proof and proceed to discuss various implications of strong duality. First note that

min_w max_{α≥0,β} L(w, α, β) = max_{α≥0,β} min_w L(w, α, β).  (3.112)

In other words, one can switch the order of minimization over w with maximization over α and β. This is called the saddle point property of convex functions.
Suppose strong duality holds. Given any α ≥ 0 and β such that D(α, β) > −∞ and a feasible w, we can immediately write the duality gap

J(w) − J* = J(w) − D* ≤ J(w) − D(α, β),

where J* and D* were defined in (3.111). Below we show that if w* is primal optimal and (α*, β*) are dual optimal then J(w*) − D(α*, β*) = 0. This provides a non-heuristic stopping criterion for constrained optimization: stop when J(w) − D(α, β) ≤ ϵ, where ϵ is a pre-specified tolerance.
Suppose the primal and dual optimal values are attained at w* and (α*, β*) respectively, and consider the following line of argument:

J(w*) = D(α*, β*)  (3.113a)
      = min_w J(w) + Σ_{i∈I} α*_i c_i(w) + Σ_{i∈E} β*_i e_i(w)  (3.113b)
      ≤ J(w*) + Σ_{i∈I} α*_i c_i(w*) + Σ_{i∈E} β*_i e_i(w*)  (3.113c)
      ≤ J(w*).  (3.113d)

To write (3.113a) we used strong duality, while (3.113c) obtains by setting w = w* in (3.113b). Finally, to obtain (3.113d) we used the fact that w* is feasible and hence (3.108) holds. Since (3.113) holds with equality, one can conclude the following complementary slackness condition:

Σ_{i∈I} α*_i c_i(w*) + Σ_{i∈E} β*_i e_i(w*) = 0.

In other words, α*_i c_i(w*) = 0, or equivalently α*_i = 0 whenever c_i(w*) < 0. Furthermore, since w* minimizes L(w, α*, β*) over w, it follows that the gradient must vanish at w*, that is,

∇J(w*) + Σ_{i∈I} α*_i ∇c_i(w*) + Σ_{i∈E} β*_i ∇e_i(w*) = 0.

Putting everything together, we obtain

c_i(w*) ≤ 0  ∀i ∈ I  (3.114a)
e_i(w*) = 0  ∀i ∈ E  (3.114b)
α*_i ≥ 0  (3.114c)
α*_i c_i(w*) = 0  (3.114d)
∇J(w*) + Σ_{i∈I} α*_i ∇c_i(w*) + Σ_{i∈E} β*_i ∇e_i(w*) = 0.  (3.114e)

The above conditions are called the KKT conditions. If the primal problem is convex, then the KKT conditions are both necessary and sufficient. In other words, if ŵ and (α̂, β̂) satisfy (3.114) then ŵ and (α̂, β̂) are primal and dual optimal with zero duality gap. To see this, note that the first two conditions show that ŵ is feasible. Since α̂_i ≥ 0, L(w, α̂, β̂) is convex in w. Finally, the last condition states that ŵ minimizes L(w, α̂, β̂). Since α̂_i c_i(ŵ) = 0 and
e_i(ŵ) = 0, we have

D(α̂, β̂) = min_w L(w, α̂, β̂)
        = J(ŵ) + Σ_{i∈I} α̂_i c_i(ŵ) + Σ_{i∈E} β̂_i e_i(ŵ)
        = J(ŵ).

3.3.3 Linear and Quadratic Programs


So far we discussed general constrained optimization problems. Many machine learning problems have special structure which can be exploited further. We discuss the implication of duality for two such problems.

3.3.3.1 Linear Programming

An optimization problem with a linear objective function and (both equality and inequality) linear constraints is said to be a linear program (LP). A canonical linear program is of the following form:

min_w c^T w  (3.115a)
s.t. Aw = b, w ≥ 0.  (3.115b)

Here w and c are n-dimensional vectors, while b is an m-dimensional vector, and A is an m × n matrix with m < n.
Suppose we are given an LP of the form:

min_w c^T w  (3.116a)
s.t. Aw ≥ b.  (3.116b)

We can transform it into a canonical LP by introducing non-negative slack variables:

min_{w,ξ} c^T w  (3.117a)
s.t. Aw − ξ = b, ξ ≥ 0.  (3.117b)

Next, we split w into its positive and negative parts w⁺ and w⁻ respectively by setting w⁺_i = max(0, w_i) and w⁻_i = max(0, −w_i). Using these new
variables we rewrite (3.117) as

min_{w⁺,w⁻,ξ} c^T w⁺ − c^T w⁻  (3.118a)
s.t. Aw⁺ − Aw⁻ − ξ = b,  w⁺, w⁻, ξ ≥ 0,  (3.118b)

thus yielding a canonical LP (3.115) in the variables w⁺, w⁻ and ξ.
By introducing Lagrange multipliers β for the equality constraints and non-negative multipliers α for the constraint w ≥ 0, one can write the Lagrangian of (3.115) as

L(w, α, β) = c^T w − β^T (Aw − b) − α^T w.  (3.119)

Taking gradients with respect to the primal and dual variables and setting them to zero obtains

A^T β + α = c  (3.120a)
Aw = b  (3.120b)
α^T w = 0  (3.120c)
w ≥ 0  (3.120d)
α ≥ 0.  (3.120e)

Condition (3.120c) can be simplified by noting that both w and α are constrained to be non-negative; therefore α^T w = 0 if, and only if, α_i w_i = 0 for i = 1, . . . , n.
Using (3.120a), (3.120c), and (3.120b) we can write

c^T w = (A^T β + α)^T w = β^T Aw = β^T b.

Substituting this into (3.115) and eliminating the primal variable w yields the following dual LP:

max_{α,β} b^T β  (3.121a)
s.t. A^T β + α = c, α ≥ 0.  (3.121b)

As before, we let β⁺ = max(β, 0) and β⁻ = max(0, −β) and convert the
above LP into the following canonical LP:

max_{α,β⁺,β⁻} b^T β⁺ − b^T β⁻  (3.122a)
s.t. A^T β⁺ − A^T β⁻ + α = c,  β⁺, β⁻, α ≥ 0.  (3.122b)

It can be easily verified that the primal-dual problem is symmetric; by taking the dual of the dual we recover the primal (Problem 3.17). One important thing to note, however, is that the primal (3.115) involves n variables and n + m constraints, while the dual (3.122) involves 2m + n variables and 4m + 2n constraints.
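The following sketch checks such a primal-dual pair numerically with scipy. The data c, A, b are made-up illustrative values, and for brevity we use the inequality form (3.116), whose standard dual is max b^T β s.t. A^T β = c, β ≥ 0, since it maps directly onto a solver call.

import numpy as np
from scipy.optimize import linprog

# primal (3.116): min c^T w  s.t.  Aw >= b, w free;
# linprog expects A_ub w <= b_ub, so we negate the constraint
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)

# dual: max b^T beta  s.t.  A^T beta = c, beta >= 0,
# written as min -b^T beta for the solver
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 2)

# strong duality: both optimal values agree (here 1.5)
print(primal.fun, -dual.fun)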

3.3.3.2 Quadratic Programming

An optimization problem with a convex quadratic objective function and linear constraints is said to be a convex quadratic program (QP). The canonical convex QP can be written as follows:

min_w (1/2) w^T G w + w^T d  (3.123a)
s.t. a_i^T w = b_i for i ∈ E  (3.123b)
     a_i^T w ≤ b_i for i ∈ I.  (3.123c)

Here G ⪰ 0 is an n × n positive semi-definite matrix, E and I are finite sets of indices, while d and the a_i are n-dimensional vectors, and the b_i are scalars.
As a warm up, let us consider the arguably simpler equality constrained quadratic program. In this case, we can stack the a_i into a matrix A and the b_i into a vector b to write

min_w (1/2) w^T G w + w^T d  (3.124a)
s.t. Aw = b.  (3.124b)

By introducing Lagrange multipliers β, the Lagrangian of the above optimization problem can be written as

L(w, β) = (1/2) w^T G w + w^T d + β^T (Aw − b).  (3.125)

To find the saddle point of the Lagrangian we take gradients with respect to w and β and set them to zero. This obtains

Gw + d + A^T β = 0
Aw = b.

Putting these two conditions together yields the following linear system of equations:

[ G  A^T ] [ w ]   [ −d ]
[ A   0  ] [ β ] = [  b ] .  (3.126)

The matrix in the above equation is called the KKT matrix, and we can use it to characterize the conditions under which (3.124) has a unique solution.

Theorem 3.23 Let Z be an n × (n − m) matrix whose columns form a basis for the null space of A, that is, AZ = 0. If A has full row rank, and the reduced-Hessian matrix Z^T G Z is positive definite, then there exists a unique pair (w*, β*) which solves (3.126). Furthermore, w* also minimizes (3.124).

Proof Note that a unique (w*, β*) exists whenever the KKT matrix is non-singular. Suppose this is not the case; then there exist vectors a and b, not both zero, such that

[ G  A^T ] [ a ]
[ A   0  ] [ b ] = 0.

Since Aa = 0, the vector a lies in the null space of A and hence there exists a u such that a = Zu. Multiplying the first block row by a^T and using Aa = 0 gives

0 = a^T G a + a^T A^T b = u^T Z^T G Z u.

Positive definiteness of Z^T G Z implies that u = 0 and hence a = 0. On the other hand, the full row rank of A together with A^T b = 0 implies that b = 0. In summary, both a and b are zero, a contradiction.
Let w ≠ w* be any other feasible point and Δw = w* − w. Since Aw* = Aw = b we have that AΔw = 0. Hence, there exists a non-zero u such that Δw = Zu. The objective function J(w) can be written as

J(w) = (1/2)(w* − Δw)^T G (w* − Δw) + (w* − Δw)^T d
     = J(w*) + (1/2) Δw^T G Δw − (Gw* + d)^T Δw.

First note that (1/2) Δw^T G Δw = (1/2) u^T Z^T G Z u > 0 by positive definiteness of the reduced Hessian. Second, since w* solves (3.126) it follows that (Gw* + d)^T Δw = −β^T A Δw = 0. Together these two observations imply that J(w) > J(w*).

If the technical conditions of the above theorem are met, then solving the equality constrained QP (3.124) is equivalent to solving the linear system (3.126). See [NW99] for an extensive discussion of algorithms that can be used for this task.
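For instance, the sketch below assembles and solves (3.126) with numpy for a small made-up instance (G, d, A, b are illustrative values, not from the text):

import numpy as np

# equality constrained QP (3.124): min 0.5 w^T G w + w^T d  s.t.  Aw = b
G = np.array([[2.0, 0.0],
              [0.0, 2.0]])
d = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = G.shape[0], A.shape[0]
# assemble the KKT matrix and right hand side of (3.126)
KKT = np.block([[G, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-d, b])
sol = np.linalg.solve(KKT, rhs)
w_star, beta_star = sol[:n], sol[n:]   # here w* = (-0.25, 1.25)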
Next we turn our attention to the general QP (3.123) which also contains inequality constraints. The Lagrangian in this case can be written as

L(w, α, β) = (1/2) w^T G w + w^T d + Σ_{i∈I} α_i (a_i^T w − b_i) + Σ_{i∈E} β_i (a_i^T w − b_i).  (3.127)

Let w* denote the minimizer of (3.123). If we define the active set A(w*) as

A(w*) = { i : i ∈ I and a_i^T w* = b_i },

then the KKT conditions (3.114) for this problem can be written as

a_i^T w* − b_i < 0  ∀i ∈ I \ A(w*)  (3.128a)
a_i^T w* − b_i = 0  ∀i ∈ E ∪ A(w*)  (3.128b)
α*_i ≥ 0  ∀i ∈ A(w*)  (3.128c)
Gw* + d + Σ_{i∈A(w*)} α*_i a_i + Σ_{i∈E} β*_i a_i = 0.  (3.128d)

Conceptually the main difficulty in solving (3.123) lies in identifying the active set A(w*), because α*_i = 0 for all i ∈ I \ A(w*). Most algorithms for solving (3.123) can be viewed as different ways to identify the active set. See [NW99] for a detailed discussion.

3.4 Stochastic Optimization

Recall that regularized risk minimization involves a data-driven optimization problem in which the objective function involves the summation of loss terms over a set of data to be modeled:

min_f J(f) := λΩ(f) + (1/m) Σ_{i=1}^m l(f(x_i), y_i).

Classical optimization techniques must compute this sum in its entirety for each evaluation of the objective, respectively its gradient. As available data sets grow ever larger, such “batch” optimizers therefore become increasingly inefficient. They are also ill-suited for the incremental setting, where partial data must be modeled as it arrives.
Stochastic gradient-based methods, by contrast, work with gradient estimates obtained from small subsamples (mini-batches) of training data. This can greatly reduce computational requirements: on large, redundant data sets, simple stochastic gradient descent routinely outperforms sophisticated second-order batch methods by orders of magnitude.
The key idea here is that J(w) is replaced by an instantaneous estimate J_t which is computed from a mini-batch of size k comprising a subset of points (x_{ti}, y_{ti}) with i = 1, . . . , k drawn from the dataset:

J_t(w) = λΩ(w) + (1/k) Σ_{i=1}^k l(w, x_{ti}, y_{ti}).  (3.129)

Setting k = 1 obtains an algorithm which processes data points as they arrive.

3.4.1 Stochastic Gradient Descent

Perhaps the simplest stochastic optimization algorithm is Stochastic Gradient Descent (SGD). The parameter update of SGD takes the form:

w_{t+1} = w_t − η_t ∇J_t(w_t).  (3.130)

If J_t is not differentiable, then one can choose an arbitrary subgradient from ∂J_t(w_t) to compute the update. It has been shown that SGD asymptotically converges to the true minimizer of J(w) if the stepsize η_t decays as O(1/√t). For instance, one could set

η_t = √(τ / (τ + t)),  (3.131)

where τ > 0 is a tuning parameter. See Algorithm 3.8 for details.

3.4.1.1 Practical Considerations

One simple yet effective rule of thumb to tune τ is to select a small subset of data, try various values of τ on this subset, and choose the τ which most reduces the objective function.
In some cases letting η_t decay as O(1/t) has been found to be more effective:

η_t = τ / (τ + t).  (3.132)

The free parameter τ > 0 can be tuned as described above. If Ω(w) is σ-strongly convex, then dividing the stepsize η_t by σλ yields good practical performance.
Algorithm 3.8 Stochastic Gradient Descent
1: Input: Maximum iterations T, batch size k, and τ
2: Set t = 0 and w_0 = 0
3: while t < T do
4:   Choose a subset of k data points (x_{ti}, y_{ti}) and compute ∇J_t(w_t)
5:   Compute stepsize η_t = √(τ/(τ + t))
6:   w_{t+1} = w_t − η_t ∇J_t(w_t)
7:   t = t + 1
8: end while
9: Return: w_T
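A minimal numpy sketch of Algorithm 3.8 for the regularized risk with Ω(w) = (1/2)‖w‖² follows; the squared loss, the random sampling scheme, and the default values of τ and k are assumptions chosen for illustration.

import numpy as np

def sgd(grad_loss, X, y, lam=0.1, tau=10.0, T=1000, k=1):
    """Algorithm 3.8 sketch for J(w) = lam/2 ||w||^2 + average loss,
    with stepsize eta_t = sqrt(tau / (tau + t)) as in (3.131)."""
    m, n = X.shape
    w = np.zeros(n)
    rng = np.random.default_rng(0)
    for t in range(T):
        idx = rng.choice(m, size=k, replace=False)    # mini-batch of size k
        g = lam * w + grad_loss(w, X[idx], y[idx])    # gradient of J_t, cf. (3.129)
        eta = np.sqrt(tau / (tau + t))
        w = w - eta * g
    return w

# squared loss l(w, x, y) = 0.5 (<w, x> - y)^2 and its mini-batch gradient;
# call as sgd(grad_sq, X, y) for data X of shape (m, n) and targets y of shape (m,)
def grad_sq(w, Xb, yb):
    return Xb.T @ (Xb @ w - yb) / len(yb)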

3.5 Nonconvex Optimization

Our focus in the previous sections was on convex objective functions. Sometimes non-convex objective functions also arise in machine learning applications. These problems are significantly harder, and tools for minimizing such objective functions are not as well developed. We briefly describe one algorithm which can be applied whenever we can write the objective function as a difference of two convex functions.

3.5.1 Concave-Convex Procedure

Any function with a bounded Hessian can be decomposed into the difference of two (non-unique) convex functions, that is, one can write

J(w) = f(w) − g(w),  (3.133)

where f and g are convex functions. Clearly, J is not convex, but there exists a reasonably simple algorithm, namely the Concave-Convex Procedure (CCP), for finding a local minimum of J. The basic idea is simple: in the t-th iteration replace g by its first order Taylor expansion at w_t, that is, g(w_t) + ⟨w − w_t, ∇g(w_t)⟩, and minimize

J_t(w) = f(w) − g(w_t) − ⟨w − w_t, ∇g(w_t)⟩.  (3.134)

Taking the gradient and setting it to zero shows that J_t is minimized by setting

∇f(w_{t+1}) = ∇g(w_t).  (3.135)

The iterations of CCP on a toy minimization problem are illustrated in Figure 3.13, while the complete algorithm listing can be found in Algorithm 3.9.

Fig. 3.13. Given the function on the left we decompose it into the difference of two convex functions depicted in the right panel. The CCP algorithm generates iterates by matching points on the two convex curves which have the same tangent vectors. As can be seen, the iterates approach the solution x = 2.0.

Algorithm 3.9 Concave-Convex Procedure


1: Input: Initial point w0, maximum iterations T , convex functions f ,g
2: Set t = 0
3: while t < T do
4: Set wt+1 = argminw f (w) − g(wt) − ⟨ w − wt, ∇g(wt)⟩
5: t= t+1
6: end while
7: Return: wT

Theorem 3.24 Let J be a function which can be decomposed into a difference of two convex functions as in (3.133). The iterates generated by (3.135) monotonically decrease J. Furthermore, a stationary point of the iterates is a local minimum of J.

Proof Since f and g are convex

f(w_t) ≥ f(w_{t+1}) + ⟨w_t − w_{t+1}, ∇f(w_{t+1})⟩
g(w_{t+1}) ≥ g(w_t) + ⟨w_{t+1} − w_t, ∇g(w_t)⟩.

Adding the two inequalities, rearranging, and using (3.135) shows that J(w_t) = f(w_t) − g(w_t) ≥ f(w_{t+1}) − g(w_{t+1}) = J(w_{t+1}), as claimed.
Let w* be a stationary point of the iterates. Then ∇f(w*) = ∇g(w*), which in turn implies that w* is a local minimum of J because ∇J(w*) = 0.
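For intuition, consider the made-up toy problem J(w) = w⁴ − 8w² with f(w) = w⁴ and g(w) = 8w², both convex. For this decomposition the update (3.135) reads 4w_{t+1}³ = 16w_t and can be solved in closed form, so Algorithm 3.9 becomes a two-line sketch:

import numpy as np

# J(w) = w^4 - 8 w^2 written as f(w) = w^4 minus g(w) = 8 w^2
def ccp(w, T=30):
    """CCP sketch: each iteration solves grad f(w_{t+1}) = grad g(w_t),
    i.e. 4 w_{t+1}^3 = 16 w_t, in closed form, cf. (3.135)."""
    for _ in range(T):
        w = np.cbrt(4.0 * w)       # w_{t+1} = (4 w_t)^(1/3)
    return w

print(ccp(1.0))   # the iterates approach the local minimum w = 2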

There are a number of extensions to CCP. We mention only a few in passing. First, it can be shown that all instances of the EM algorithm (Section ??) are special cases of CCP. Second, the rate of convergence of CCP is related to the eigenvalues of the positive semi-definite matrix ∇²(f + g). Third, CCP can also be extended to solve constrained problems of the form:

min_w f_0(w) − g_0(w)
s.t. f_i(w) − g_i(w) ≤ c_i for i = 1, . . . , n,

where, as before, f_i and g_i for i = 0, 1, . . . , n are assumed convex. At every iteration, we replace g_i by its first order Taylor approximation and solve the following constrained convex problem:

min_w f_0(w) − g_0(w_t) − ⟨w − w_t, ∇g_0(w_t)⟩
s.t. f_i(w) − g_i(w_t) − ⟨w − w_t, ∇g_i(w_t)⟩ ≤ c_i for i = 1, . . . , n.

3.6 Some Practical Advice

The range of optimization algorithms we presented in this chapter might be somewhat intimidating for the beginner. Some simple rules of thumb can alleviate this anxiety.

Code Reuse: Implementing an efficient optimization algorithm correctly is both time consuming and error prone. Therefore, as far as possible, use existing libraries; a number of high quality optimization libraries, both commercial and open source, exist.

Unconstrained Problems: For unconstrained minimization of a smooth convex function LBFGS (Section 3.2.6.1) is the algorithm of choice. In many practical situations the spectral gradient method (Section 3.2.6.2) is also very competitive, and it has the added advantage of being easy to implement. If the function to be minimized is non-smooth then bundle methods (Section 3.2.7) are to be preferred. Amongst the different formulations, the Bundle Trust algorithm tends to be quite robust.

Constrained Problems: For constrained problems it is very important to understand the nature of the constraints. Simple equality (Ax = b) and box (l ≤ x ≤ u) constraints are easier to handle than general non-linear constraints. If the objective function is smooth, the constraint set Ω is simple, and orthogonal projections P_Ω are easy to compute, then spectral projected gradient (Section 3.3.1) is the method of choice. If the optimization problem is a QP or an LP then specialized solvers tend to be much faster than general purpose solvers.
Large Scale Problems: If your parameter vector is high dimensional then consider coordinate descent (Section 3.2.2), especially if the one dimensional line search along a coordinate can be carried out efficiently. If the objective function is made up of a sum of a large number of terms, consider stochastic gradient descent (Section 3.4.1). Although neither of these algorithms guarantees a very accurate solution, practical experience shows that for large scale machine learning problems this is rarely necessary.

Duality: Sometimes problems which are hard to optimize in the primal may become simpler in the dual. For instance, if the objective function is strongly convex but non-smooth, its Fenchel conjugate is smooth with a Lipschitz continuous gradient.

Problems

Problem 3.1 (Intersection of Convex Sets {1}) If C_1 and C_2 are convex sets, then show that C_1 ∩ C_2 is also convex. Extend your result to show that ∩_{i=1}^n C_i is convex if all the C_i are convex.

Problem 3.2 (Linear Transform of Convex Sets {1}) Given a set C ⊂ R^n and a linear transform A ∈ R^{m×n}, define AC := {y = Ax : x ∈ C}. If C is convex then show that AC is also convex.

Problem 3.3 (Convex Combinations {1}) Show that a subset of R^n is convex if and only if it contains all the convex combinations of its elements.

Problem 3.4 (Convex Hull {2}) Show that the convex hull, conv(X), is the smallest convex set which contains X.

Problem 3.5 (Epigraph of a Convex Function {2}) Show that a function satisfies Definition 3.3 if, and only if, its epigraph is convex.

Problem 3.6 Prove Jensen's inequality (3.6).

Problem 3.7 (Strong convexity of the negative entropy {3}) Show that the negative entropy (3.15) is 1-strongly convex with respect to the ‖·‖_1 norm on the simplex. Hint: First show that φ(t) := (t − 1) log t − 2(t − 1)²/(t + 1) ≥ 0 for all t ≥ 0. Next substitute t = x_i/y_i to show that

Σ_i (x_i − y_i) log(x_i/y_i) ≥ ‖x − y‖²_1.
Problem 3.8 (Strongly Convex Functions {2}) Prove 3.16, 3.17, 3.18 and 3.19.

Problem 3.9 (Convex Functions with Lipschitz Continuous Gradient {2}) Prove 3.22, 3.23, 3.24 and 3.25.

Problem 3.10 (One Dimensional Projection {1}) If f : R^d → R is convex, then show that for arbitrary x and p in R^d the one dimensional function Φ(η) := f(x + ηp) is also convex.

Problem 3.11 (Quasi-Convex Functions {2}) In Section 3.1 we showed that the below-sets of a convex function X_c := {x | f(x) ≤ c} are convex. Give a counter-example to show that the converse is not true, that is, there exist non-convex functions whose below-sets are convex. This class of functions is called Quasi-Convex.

Problem 3.12 (Gradient of the p-norm {1}) Show that the gradient of the p-norm (3.31) is given by (3.32).

Problem 3.13 Derive the Fenchel conjugate of the following functions:

f(x) = 0 if x ∈ C, and ∞ otherwise, where C is a convex set
f(x) = ax + b
f(x) = (1/2) x^T A x, where A is a positive definite matrix
f(x) = −log(x)
f(x) = exp(x)
f(x) = x log(x)
Problem 3.14 (Convergence of gradient descent {2}) Suppose J has a Lipschitz continuous gradient with modulus L. Then show that Algorithm 3.2 with an inexact line search satisfying the Wolfe conditions (3.42) and (3.43) will return a solution w_t with ‖∇J(w_t)‖ ≤ ϵ in at most O(1/ϵ²) iterations.
Problem 3.15 Show that

(Σ_{t=1}^T 1/√t) / (1 + Σ_{t=1}^T 1/t) ≤ √T.

Problem 3.16 (Coordinate Descent for Quadratic Programming {2}) Derive a projection based method which uses coordinate descent to generate directions of descent for solving the following box constrained QP:

min_{w∈R^n} (1/2) w^T Q w + c^T w
s.t. l ≤ w ≤ u.

You may assume that Q is positive definite and l and u are scalars.

Problem 3.17 (Dual of an LP {1}) Show that the dual of the LP (3.122) is (3.115). In other words, we recover the primal by computing the dual of the dual.
4

Online Learning and Boosting

So far the learning algorithms we considered assumed that all the training
data is available before building a model for predicting labels on unseen data
points. In many modern applications data is available only in a streaming
fashion, and one needs to predict labels on the fly. To describe a concrete
example, consider the task of spam filtering. As emails arrive the learning
algorithm needs to classify them as spam or ham. Tasks such as these are
tackled via online learning. Online learning proceeds in rounds. At each
round a training example is revealed to the learning algorithm, which uses
its current model to predict the label. The true label is then revealed to
the learner which incurs a loss and updates its model based on the feedback
provided. This protocol is summarized in Algorithm 4.1. The goal of online
learning is to minimize the total loss incurred. By an appropriate choice
of labels and loss functions, this setting encompasses a large number of tasks
such as classification, regression, and density estimation. In our spam
detection example, if an email is misclassified the user can provide feedback
which is used to update the spam filter, and the goal is to minimize the
number of misclassified emails.

4.1 Halving Algorithm


The halving algorithm is conceptually simple, yet it illustrates many of the
concepts in online learning. Suppose we have access to a set of n experts,
that is, functions fi which map from the input space X to the output space
Y = {±1}. Furthermore, assume that one of the experts is consistent, that
is, there exists a j ∈ {1, . . . , n} such that fj(xt) = yt for t = 1, . . . , T . The
halving algorithm maintains a set Ct of consistent experts at time t. Initially
C0 = {1, . . . , n}, and it is updated recursively as
Ct+1 = {i ∈ Ct s.t. fi(xt+1) = yt+1} . (4.1)
The prediction on a new data point is computed via a majority vote amongst
the consistent experts: ŷt = majority(Ct ).

Lemma 4.1 The Halving algorithm makes at most log2 (n) mistakes.


Algorithm 4.1 Protocol of Online Learning
1: for t = 1, . . . , T do
2:   Get training instance x_t
3:   Predict label ŷ_t
4:   Get true label y_t
5:   Incur loss l(ŷ_t, x_t, y_t)
6:   Update model
7: end for

Proof Let M denote the total number of mistakes. The halving algorithm makes a mistake at iteration t if at least half the consistent experts in C_t predict the wrong label, in which case |C_{t+1}| ≤ |C_t|/2. Since each of the M mistakes at least halves the set of consistent experts, at the end of the T rounds

|C_T| ≤ |C_0| / 2^M = n / 2^M.

On the other hand, since one of the experts is consistent it follows that 1 ≤ |C_T|. Therefore, 2^M ≤ n. Solving for M completes the proof.
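A direct implementation of the halving algorithm is only a few lines; the representation of experts as Python callables and the tie-breaking rule in the majority vote are illustrative choices:

def halving(experts, stream):
    """Halving algorithm: experts is a list of functions x -> {+1, -1},
    stream yields (x_t, y_t) pairs; assumes some expert is consistent."""
    C = set(range(len(experts)))
    mistakes = 0
    for x, y in stream:
        votes = sum(experts[i](x) for i in C)
        y_hat = 1 if votes >= 0 else -1              # majority vote, ties -> +1
        mistakes += (y_hat != y)
        C = {i for i in C if experts[i](x) == y}     # update (4.1)
    return mistakes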

4.2 Weighted Majority

We now turn to the scenario where none of the experts is consistent. Therefore, the aim here is not to minimize the number of mistakes but to minimize regret.
In this chapter we will consider online methods for solving the following optimization problem:

min_{w∈Ω} J(w)  where  J(w) = Σ_{t=1}^T f_t(w).  (4.2)

Suppose we have access to a function ψ which is continuously differentiable and strongly convex with modulus of strong convexity σ > 0 (see Section 3.1.4 for the definition of strong convexity). Then we can define the Bregman divergence (3.29) corresponding to ψ as

Δ_ψ(w, w′) = ψ(w) − ψ(w′) − ⟨w − w′, ∇ψ(w′)⟩.

We can also generalize the orthogonal projection (3.104) by replacing the squared Euclidean norm with the above Bregman divergence:

P_{ψ,Ω}(w′) = argmin_{w∈Ω} Δ_ψ(w, w′).  (4.3)
Algorithm 4.2 Stochastic (sub)gradient Descent
1: Input: Initial point w_1, maximum iterations T
2: for t = 1, . . . , T do
3:   Compute ŵ_{t+1} = ∇ψ*(∇ψ(w_t) − η_t g_t) with g_t ∈ ∂f_t(w_t)
4:   Set w_{t+1} = P_{ψ,Ω}(ŵ_{t+1})
5: end for
6: Return: w_{T+1}

Denote w* = P_{ψ,Ω}(w′). Just like the Euclidean projection, the Bregman projection can be shown to be non-expansive in the following sense:

Δ_ψ(w, w′) ≥ Δ_ψ(w, w*) + Δ_ψ(w*, w′)  (4.4)

for all w ∈ Ω. The diameter of Ω as measured by Δ_ψ is given by

diam_ψ(Ω) = max_{w,w′∈Ω} Δ_ψ(w, w′).  (4.5)

For the rest of this chapter we will make the following standard assumptions:

• Each f_t is convex and revealed at time instance t.
• Ω is a closed convex subset of R^n with non-empty interior.
• The diameter diam_ψ(Ω) of Ω is bounded by F < ∞.
• The set of optimal solutions of (4.2), denoted by Ω*, is non-empty.
• The subgradient g_t ∈ ∂f_t(w) can be computed for every t and w ∈ Ω.
• The Bregman projection (4.3) can be computed for every w′ ∈ R^n.
• The gradient ∇ψ and its inverse (∇ψ)^{−1} = ∇ψ* can be computed.
The method we employ to solve (4.2) is given in Algorithm 4.2. Before analyzing the performance of the algorithm we would like to discuss three special cases: first, the squared Euclidean distance, which recovers projected stochastic gradient descent; second, the entropy, which recovers exponentiated gradient descent; and third, the p-norms for p > 2, which recover the p-norm perceptron. BUGBUG TODO.
Our key result is Lemma 4.3, given below. It can be found in various guises in different places, most notably Lemma 2.1 and 2.2 in [?], Theorem 4.1 and Eq. (4.21) and (4.15) in [?], in the proof of Theorem 1 of [?], as well as Lemma 3 of [?]. We prove a slightly more general variant: we allow for projections with an arbitrary Bregman divergence and also take into account a generalized version of strong convexity of f_t. Both these modifications will allow us to deal with general settings within a unified framework.
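As a concrete instance of Algorithm 4.2, when ψ(w) = Σ_i w_i log w_i on the probability simplex the update in step 3 becomes a multiplicative update and the Bregman projection in step 4 reduces to a normalization; this is the exponentiated gradient method mentioned above. The sketch below assumes a fixed stepsize for simplicity:

import numpy as np

def exponentiated_gradient(subgradients, w0, eta=0.1):
    """Algorithm 4.2 with psi(w) = sum_i w_i log w_i on the simplex:
    step 3 becomes w_i <- w_i * exp(-eta * g_i), and the Bregman
    projection of step 4 is a simple normalization."""
    w = np.asarray(w0, dtype=float)
    for subgrad in subgradients:       # one f_t revealed per round
        g = subgrad(w)
        w = w * np.exp(-eta * g)
        w = w / w.sum()                # project back onto the simplex
    return w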

Definition 4.2 We say that a convex function f is strongly convex with respect to another convex function ψ with modulus λ if

f(w) − f(w′) − ⟨w − w′, μ⟩ ≥ λ Δ_ψ(w, w′) for all μ ∈ ∂f(w′).  (4.6)

The usual notion of strong convexity is recovered by setting ψ(·) = (1/2)‖·‖².

Lemma 4.3 Let f_t be strongly convex with respect to ψ with modulus λ ≥ 0 for all t. For any w ∈ Ω the sequences generated by Algorithm 4.2 satisfy

Δ_ψ(w, w_{t+1}) ≤ Δ_ψ(w, w_t) − η_t ⟨g_t, w_t − w⟩ + (η_t²/2σ) ‖g_t‖²  (4.7)
             ≤ (1 − η_t λ) Δ_ψ(w, w_t) − η_t (f_t(w_t) − f_t(w)) + (η_t²/2σ) ‖g_t‖².  (4.8)

Proof We prove the result in three steps. First we upper bound Δ_ψ(w, w_{t+1}) by Δ_ψ(w, ŵ_{t+1}). This is a consequence of (4.4) and the non-negativity of the Bregman divergence, which allow us to write

Δ_ψ(w, w_{t+1}) ≤ Δ_ψ(w, ŵ_{t+1}).  (4.9)

In the next step we use Lemma 3.11 to write

Δ_ψ(w, w_t) + Δ_ψ(w_t, ŵ_{t+1}) − Δ_ψ(w, ŵ_{t+1}) = ⟨∇ψ(ŵ_{t+1}) − ∇ψ(w_t), w − w_t⟩.

Since ∇ψ* = (∇ψ)^{−1}, the update in step 3 of Algorithm 4.2 can equivalently be written as ∇ψ(ŵ_{t+1}) − ∇ψ(w_t) = −η_t g_t. Plugging this into the above equation and rearranging gives

Δ_ψ(w, ŵ_{t+1}) = Δ_ψ(w, w_t) − η_t ⟨g_t, w_t − w⟩ + Δ_ψ(w_t, ŵ_{t+1}).  (4.10)

Finally we upper bound Δ_ψ(w_t, ŵ_{t+1}). For this we need two observations: First, ⟨x, y⟩ ≤ (1/2σ)‖x‖² + (σ/2)‖y‖² for all x, y ∈ R^n and σ > 0. Second, the strong convexity of ψ allows us to bound Δ_ψ(ŵ_{t+1}, w_t) ≥ (σ/2)‖w_t − ŵ_{t+1}‖². Using these two observations

Δ_ψ(w_t, ŵ_{t+1}) = ψ(w_t) − ψ(ŵ_{t+1}) − ⟨∇ψ(ŵ_{t+1}), w_t − ŵ_{t+1}⟩
= −(ψ(ŵ_{t+1}) − ψ(w_t) − ⟨∇ψ(w_t), ŵ_{t+1} − w_t⟩) + ⟨η_t g_t, w_t − ŵ_{t+1}⟩
= −Δ_ψ(ŵ_{t+1}, w_t) + ⟨η_t g_t, w_t − ŵ_{t+1}⟩
≤ −(σ/2)‖w_t − ŵ_{t+1}‖² + (η_t²/2σ)‖g_t‖² + (σ/2)‖w_t − ŵ_{t+1}‖²
= (η_t²/2σ)‖g_t‖².  (4.11)

Inequality (4.7) follows by putting together (4.9), (4.10), and (4.11), while (4.8) follows by using (4.6) with f = f_t and w′ = w_t and substituting into (4.7).
Now we are ready to prove regret bounds.

Lemma 4.4 Let w* ∈ Ω denote the best parameter chosen in hindsight, and let ‖g_t‖ ≤ L for all t. Then the regret of Algorithm 4.2 can be bounded via

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ F (1/η_T − Tλ) + (L²/2σ) Σ_{t=1}^T η_t.  (4.12)
t=1 t=1

Proof Set w = w* and rearrange (4.8) to obtain

f_t(w_t) − f_t(w*) ≤ (1/η_t)((1 − λη_t) Δ_ψ(w*, w_t) − Δ_ψ(w*, w_{t+1})) + (η_t/2σ)‖g_t‖².

Summing over t,

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ T_1 + T_2, where
T_1 = Σ_{t=1}^T (1/η_t)((1 − η_t λ) Δ_ψ(w*, w_t) − Δ_ψ(w*, w_{t+1})) and T_2 = Σ_{t=1}^T (η_t/2σ)‖g_t‖².

Since the diameter of Ω is bounded by F and Δ_ψ is non-negative,

T_1 = (1/η_1 − λ) Δ_ψ(w*, w_1) − (1/η_T) Δ_ψ(w*, w_{T+1}) + Σ_{t=2}^T (1/η_t − 1/η_{t−1} − λ) Δ_ψ(w*, w_t)
   ≤ (1/η_1 − λ) Δ_ψ(w*, w_1) + Σ_{t=2}^T (1/η_t − 1/η_{t−1} − λ) Δ_ψ(w*, w_t)
   ≤ (1/η_1 − λ) F + F Σ_{t=2}^T (1/η_t − 1/η_{t−1} − λ) = F (1/η_T − Tλ).

On the other hand, since ‖g_t‖ ≤ L for all t, it follows that

T_2 ≤ (L²/2σ) Σ_{t=1}^T η_t.

Putting together the bounds for T_1 and T_2 yields (4.12).

Corollary 4.5 If λ > 0 and we set η_t = 1/(λt), then

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ (L²/2σλ)(1 + log(T)).

On the other hand, when λ = 0, if we set η_t = 1/√t, then

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ (F + L²/σ) √T.

Proof First consider λ > 0 with η_t = 1/(λt). In this case 1/η_T = Tλ, and consequently (4.12) specializes to

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ (L²/2σλ) Σ_{t=1}^T (1/t) ≤ (L²/2σλ)(1 + log(T)).

When λ = 0 and we set η_t = 1/√t, we use problem 4.2 to rewrite (4.12) as

Σ_{t=1}^T f_t(w_t) − f_t(w*) ≤ F√T + (L²/2σ) Σ_{t=1}^T (1/√t) ≤ F√T + (L²/σ)√T.

Problems

Problem 4.1 (Generalized Cauchy-Schwartz {1}) Show that ⟨x, y⟩ ≤ (1/2σ)‖x‖² + (σ/2)‖y‖² for all x, y ∈ R^n and σ > 0.

Problem 4.2 (Bounding sum of a series {1}) Show that Σ_{t=a}^b 1/(2√t) ≤ √(b − a + 1). Hint: Upper bound the sum by an integral.
5

Conditional Densities

A number of machine learning algorithms can be derived by using conditional exponential families of distribution (Section 2.3). Assume that the training set {(x_1, y_1), . . . , (x_m, y_m)} was drawn iid from some underlying distribution. Using Bayes rule (1.15) one can write the likelihood

p(θ|X, Y) ∝ p(θ) p(Y|X, θ) = p(θ) ∏_{i=1}^m p(y_i|x_i, θ),  (5.1)

and hence the negative log-likelihood

−log p(θ|X, Y) = −Σ_{i=1}^m log p(y_i|x_i, θ) − log p(θ) + const.  (5.2)

Because we do not have any prior knowledge about the data, we choose a zero mean unit variance isotropic normal distribution for p(θ). This yields

−log p(θ|X, Y) = (1/2)‖θ‖² − Σ_{i=1}^m log p(y_i|x_i, θ) + const.  (5.3)

Finally, if we assume a conditional exponential family model for p(y|x, θ), that is,

p(y|x, θ) = exp(⟨φ(x, y), θ⟩ − g(θ|x)),  (5.4)

then

−log p(θ|X, Y) = (1/2)‖θ‖² + Σ_{i=1}^m [g(θ|x_i) − ⟨φ(x_i, y_i), θ⟩] + const.,  (5.5)

where

g(θ|x) = log Σ_{y∈Y} exp(⟨φ(x, y), θ⟩)  (5.6)

is the log-partition function. Clearly, (5.5) is a smooth convex objective function, and algorithms for unconstrained minimization from Chapter 3

can be used to obtain the maximum a posteriori (MAP) estimate for θ. Given the optimal θ, the class label at any given x can be predicted using

y* = argmax_y p(y|x, θ).  (5.7)

In this chapter we will discuss a number of algorithms that can be derived by specializing the above setup. Our discussion unifies seemingly disparate algorithms, which are often discussed separately in the literature.

5.1 Logistic Regression

We begin with the simplest case, namely binary classification¹. The key observation here is that the labels y ∈ {±1}, and hence

g(θ|x) = log(exp(⟨φ(x, +1), θ⟩) + exp(⟨φ(x, −1), θ⟩)).  (5.8)

Define φ̂(x) := φ(x, +1) − φ(x, −1). Plugging (5.8) into (5.4), using the definition of φ̂, and rearranging yields

p(y = +1|x, θ) = 1/(1 + exp(−⟨φ̂(x), θ⟩))  and
p(y = −1|x, θ) = 1/(1 + exp(⟨φ̂(x), θ⟩)),

or more compactly

p(y|x, θ) = 1/(1 + exp(−y ⟨φ̂(x), θ⟩)).  (5.9)

Since p(y|x, θ) is a logistic function, hence the name logistic regression. The classification rule (5.7) in this case specializes as follows: predict +1 whenever p(y = +1|x, θ) ≥ p(y = −1|x, θ), otherwise predict −1. However,

log [p(y = +1|x, θ) / p(y = −1|x, θ)] = ⟨φ̂(x), θ⟩,

therefore one can equivalently use sign(⟨φ̂(x), θ⟩) as the prediction function. Using (5.9) we can write the objective function of logistic regression as

(1/2)‖θ‖² + Σ_{i=1}^m log(1 + exp(−y_i ⟨φ̂(x_i), θ⟩)).

¹ The name logistic regression is a misnomer!


To minimize the above objective function we first compute the gradient:

∇J(θ) = θ + Σ_{i=1}^m [exp(−y_i⟨φ̂(x_i), θ⟩) / (1 + exp(−y_i⟨φ̂(x_i), θ⟩))] (−y_i φ̂(x_i))
      = θ + Σ_{i=1}^m (p(y_i|x_i, θ) − 1) y_i φ̂(x_i).

Notice that the second term of the gradient vanishes whenever p(y_i|x_i, θ) = 1. Therefore, one way to interpret logistic regression is to view it as a method to maximize p(y_i|x_i, θ) for each point (x_i, y_i) in the training set. Since the objective function of logistic regression is twice differentiable, one can also compute its Hessian

∇²J(θ) = I + Σ_{i=1}^m p(y_i|x_i, θ)(1 − p(y_i|x_i, θ)) φ̂(x_i) φ̂(x_i)^T,

where we used y_i² = 1. The Hessian can be used in the Newton method (Section 3.2.6) to obtain the optimal parameter θ.
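A compact numpy sketch of this Newton iteration follows; the design matrix Phi holding the rows φ̂(x_i), the fixed iteration count, and the absence of a line search are simplifying assumptions:

import numpy as np

def newton_logistic(Phi, y, iters=20):
    """Newton's method sketch for 0.5 ||theta||^2 +
    sum_i log(1 + exp(-y_i <phi_i, theta>)); rows of Phi are phi_hat(x_i),
    labels y_i are in {+1, -1}."""
    m, n = Phi.shape
    theta = np.zeros(n)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-y * (Phi @ theta)))   # p(y_i | x_i, theta)
        grad = theta + Phi.T @ ((p - 1.0) * y)
        W = (p * (1.0 - p))[:, None]                   # diagonal Hessian weights
        H = np.eye(n) + Phi.T @ (W * Phi)
        theta -= np.linalg.solve(H, grad)
    return theta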

5.2 Regression
5.2.1 Conditionally Normal Models
fixed variance

5.2.2 Posterior Distribution


integrating out vs. Laplace approximation, efficient estimation (sparse greedy)

5.2.3 Heteroscedastic Estimation


explain that we have two parameters. not too many details (do that as an
assignment).

5.3 Multiclass Classification


5.3.1 Conditionally Multinomial Models
joint feature map

5.4 What is a CRF?


• Motivation with learning a digit example
• general definition
• Gaussian process + structure = CRF

5.4.1 Linear Chain CRFs


• Graphical model
• Applications
• Optimization problem

5.4.2 Higher Order CRFs


• 2-d CRFs and their applications in vision
• Skip chain CRFs
• Hierarchical CRFs (graph transducers, sutton et. al. JMLR etc)

5.4.3 Kernelized CRFs


• From feature maps to kernels
• The clique decomposition theorem
• The representer theorem
• Optimization strategies for kernelized CRFs

5.5 Optimization Strategies


5.5.1 Getting Started
• three things needed to optimize
– MAP estimate
– log-partition function
– gradient of log-partition function
• Worked out example (linear chain?)

5.5.2 Optimization Algorithms


- Optimization algorithms (LBFGS, SGD, EG (Globerson et. al))

5.5.3 Handling Higher order CRFs


- How things can be done for higher order CRFs (briefly)

5.6 Hidden Markov Models


• Definition
• Discuss that they are modeling joint distribution p(x, y)
• The way they predict is by marginalizing out x
• Why they are wasteful and why CRFs generally outperform them

5.7 Further Reading


What we did not talk about:
• Details of HMM optimization
• CRFs applied to predicting parse trees via matrix tree theorem (collins,
koo et al)
• CRFs for graph matching problems
• CRFs with Gaussian distributions (yes they exist)

5.7.1 Optimization
issues in optimization (blows up with number of classes). structure is not
there. can we do better?

Problems
Problem 5.1 Poisson models

Problem 5.2 Bayes Committee Machine

Problem 5.3 Newton / CG approach


6

Kernels and Function Spaces

Kernels are measures of similarity. Broadly speaking, machine learning algorithms which rely only on the dot product between instances can be “kernelized” by replacing all instances of ⟨x, x′⟩ by a kernel function k(x, x′). We saw examples of such algorithms in Sections 1.3.3 and 1.3.4 and we will see many more examples in Chapter 7. Arguably, the design of a good kernel underlies the success of machine learning in many applications. In this chapter we will lay the ground for the theoretical properties of kernels and present a number of examples. Algorithms which use these kernels can be found in later chapters.

6.1 The Basics

Let X denote the space of inputs and k : X × X → R be a function which satisfies

k(x, x′) = ⟨Φ(x), Φ(x′)⟩,  (6.1)

where Φ is a feature map which maps X into some dot product space H. In other words, kernels correspond to dot products in some dot product space. The main advantages of using a kernel as a similarity measure are threefold: First, if the feature space is rich enough, then simple estimators such as hyperplanes and half-spaces may be sufficient. For instance, to classify the points in Figure BUGBUG, we need a nonlinear decision boundary, but once we map the points to a 3 dimensional space a hyperplane suffices. Second, kernels allow us to construct machine learning algorithms in the dot product space H without explicitly computing Φ(x). Third, we need not make any assumptions about the input space X other than for it to be a set. As we will see later in this chapter, this allows us to compute similarity between discrete objects such as strings, trees, and graphs. In the first half of this chapter we will present some examples of kernels, and discuss some theoretical properties of kernels in the second half.


6.1.1 Examples

6.1.1.1 Linear Kernel

Linear kernels are perhaps the simplest of all kernels. We assume that x ∈ R^n and define

k(x, x′) = ⟨x, x′⟩ = Σ_i x_i x′_i.

If x and x′ are dense then computing the kernel takes O(n) time. On the other hand, for sparse vectors this can be reduced to O(|nnz(x) ∩ nnz(x′)|), where nnz(·) denotes the set of non-zero indices of a vector and |·| denotes the size of a set. Linear kernels are a natural representation to use for vectorial data. They are also widely used in text mining, where documents are represented by a vector containing the frequency of occurrence of words (recall that we encountered this so-called bag of words representation in Chapter 1). Instead of a simple bag of words, one can also map a text to the set of pairs of words that co-occur in a sentence for a richer representation.

6.1.1.2 Polynomial Kernel

Given x ∈ R^n, we can compute a feature map Φ by taking all the d-th order products (also called the monomials) of the entries of x. To illustrate with a concrete example, let us consider x = (x_1, x_2) and d = 2, in which case Φ(x) = (x_1², x_2², x_1x_2, x_2x_1). Although it is tedious to compute Φ(x) and Φ(x′) explicitly in order to compute k(x, x′), there is a shortcut as the following proposition shows.

Proposition 6.1 Let Φ(x) (resp. Φ(x′)) denote the vector whose entries are all possible d-th degree ordered products of the entries of x (resp. x′). Then

k(x, x′) = ⟨Φ(x), Φ(x′)⟩ = ⟨x, x′⟩^d.  (6.2)

Proof By direct computation

⟨Φ(x), Φ(x′)⟩ = Σ_{j_1} . . . Σ_{j_d} x_{j_1} . . . x_{j_d} · x′_{j_1} . . . x′_{j_d}
             = Σ_{j_1} x_{j_1} x′_{j_1} . . . Σ_{j_d} x_{j_d} x′_{j_d} = (Σ_j x_j x′_j)^d = ⟨x, x′⟩^d.
The kernel (6.2) is called the polynomial kernel. A useful extension is the inhomogeneous polynomial kernel

k(x, x′) = (⟨x, x′⟩ + c)^d,  (6.3)

which computes all monomials up to degree d (problem 6.2).
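A quick numerical check of Proposition 6.1 for d = 2 (the vectors below are made-up values):

import numpy as np

x = np.array([1.0, 2.0])
xp = np.array([3.0, 1.0])

# explicit degree-2 feature map Phi(x) = (x1^2, x2^2, x1 x2, x2 x1)
def Phi(v):
    return np.array([v[0]**2, v[1]**2, v[0]*v[1], v[1]*v[0]])

lhs = Phi(x) @ Phi(xp)       # <Phi(x), Phi(x')>
rhs = (x @ xp) ** 2          # <x, x'>^d with d = 2, eq. (6.2)
assert np.isclose(lhs, rhs)  # both equal 25.0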

6.1.1.3 Radial Basis Function Kernels


6.1.1.4 Convolution Kernels
The framework of convolution kernels is a general way to extend the notion of kernels to structured objects such as strings, trees, and graphs. Let x ∈ X be a discrete object which can be decomposed into P parts x_p ∈ X_p in many different ways. As a concrete example, consider the string x = abc, which can be split into two sets of substrings of size two, namely {a, bc} and {ab, c}. We denote the set of all such decompositions as R(x), and use it to build a kernel on X as follows:

[k_1 ⋆ . . . ⋆ k_P](x, x′) = Σ_{x̄∈R(x), x̄′∈R(x′)} ∏_{p=1}^P k_p(x̄_p, x̄′_p).  (6.4)

Here, the sum is over all possible ways in which we can decompose x and x′ into x̄_1, . . . , x̄_P and x̄′_1, . . . , x̄′_P respectively. If the cardinality of R(x) is finite, then it can be shown that (6.4) results in a valid kernel. Although convolution kernels provide the abstract framework, specific instantiations of this idea lead to a rich set of kernels on discrete objects. We will now discuss some of them in detail.

6.1.1.5 String Kernels

The basic idea behind string kernels is simple: compare the strings by means of the subsequences they contain. The more common subsequences two strings share, the more similar they are. The subsequences need not have equal weights. For instance, the weight of a subsequence may be given by the inverse frequency of its occurrence. Similarly, if the first and last characters of a subsequence are rather far apart, then its contribution to the kernel must be down-weighted.
Formally, a string x is composed of characters from a finite alphabet Σ and |x| denotes its length. We say that s is a subsequence of x = x_1 x_2 . . . x_{|x|} if s = x_{i_1} x_{i_2} . . . x_{i_{|s|}} for some 1 ≤ i_1 < i_2 < . . . < i_{|s|} ≤ |x|. In particular, if i_{j+1} = i_j + 1 for all j then s is a substring of x. For example, acb is not a subsequence of adbc, while abc is a subsequence and adc is a subsequence but not a substring.
Assume that there exists a function #(x, s) which returns the number of times a subsequence s occurs in x, and a non-negative weighting function w(s) ≥ 0 which returns the weight associated with s. Then the basic string kernel can be written as

k(x, x′) = Σ_s #(x, s) #(x′, s) w(s).  (6.5)

Different string kernels are derived by specializing the above equation:


All substrings kernel: If we restrict the summation in (6.5) to substrings, then [VS04] provide a suffix tree based algorithm which allows one to compute the kernel k(x, x′) for arbitrary w(s) in O(|x| + |x′|) time and memory.

k-Spectrum kernel: The k-spectrum kernel is obtained by restricting the summation in (6.5) to substrings of length k. A slightly more general variant considers all substrings of length up to k. Here k is a tuning parameter which is typically set to be a small number (e.g., 5). A simple trie based algorithm can be used to compute the k-spectrum kernel in O((|x| + |x′|)k) time (problem 6.3); a simple hashing based sketch is given at the end of this subsection.

Inexact substring kernel: Sometimes the input strings might have measurement errors and therefore it is desirable to take into account inexact matches. This is done by replacing #(x, s) in (6.5) by another function #(x, s, ϵ) which reports the number of approximate matches of s in x. Here ϵ denotes the number of mismatches allowed, typically a small number (e.g., 3). By trading off computational complexity with storage the kernel can be computed efficiently. See [LK03] for details.

Mismatch kernel: Instead of simply counting the number of occurrences of a substring, if we use a weighting scheme which down-weights the contributions of longer subsequences then this yields the so-called mismatch kernel. Given an index sequence I = (i_1, . . . , i_k) with 1 ≤ i_1 < i_2 < . . . < i_k ≤ |x| we can associate the subsequence x(I) = x_{i_1} x_{i_2} . . . x_{i_k} with I. Furthermore, define |I| = i_k − i_1 + 1. Clearly, |I| > k if I is not contiguous. Let λ ≤ 1 be a decay factor. Redefine

#(x, s) = Σ_{I: s=x(I)} λ^{|I|},  (6.6)

that is, we count all occurrences of s in x, but now the weight associated with a subsequence depends on its length. To illustrate, the index sequences (1, 2, 3) and (1, 5, 6) both pick out the subsequence abc in the string abcebc; the first occurrence is counted with weight λ³ while the second occurrence is counted with the weight λ⁶. As it turns out, this kernel can be computed by a dynamic programming algorithm (problem BUGBUG) in O(|x| · |x′|) time.
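Here is the promised k-spectrum sketch. It hashes substring counts rather than using the trie of the text, which keeps the code short at the cost of the stated complexity guarantee:

from collections import Counter

def spectrum_kernel(x, xp, k):
    """k-spectrum kernel: (6.5) restricted to substrings of length k
    with uniform weights w(s) = 1."""
    cx = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    cxp = Counter(xp[i:i + k] for i in range(len(xp) - k + 1))
    return sum(cx[s] * cxp[s] for s in cx if s in cxp)

# shared bigrams of "abcabc" and "abcd": ab (2*1) and bc (2*1), so k = 4
print(spectrum_kernel("abcabc", "abcd", 2))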
6.1.1.6 Graph Kernels

There are two different notions of graph kernels. First, kernels on graphs are used to compare nodes of a single graph. In contrast, kernels between graphs focus on comparing two graphs. A random walk (or its continuous time limit, diffusion) underlies both types of kernels. The basic intuition is that two nodes are similar if there are a number of paths which connect them, while two graphs are similar if they share many common paths. To describe these kernels formally we need to introduce some notation.
A graph G consists of an ordered set of n vertices V = {v_1, v_2, . . . , v_n}, and a set of directed edges E ⊂ V × V. A vertex v_i is said to be a neighbor of another vertex v_j if they are connected by an edge, i.e., if (v_i, v_j) ∈ E; this is also denoted v_i ∼ v_j. The adjacency matrix of a graph is the n × n matrix A with A_ij = 1 if v_i ∼ v_j, and 0 otherwise. A walk of length k on G is a sequence of indices i_0, i_1, . . . , i_k such that v_{i_{r−1}} ∼ v_{i_r} for all 1 ≤ r ≤ k.
The adjacency matrix has a normalized cousin, defined Ã := A D^{−1}, which has the property that each of its columns sums to one, and it can therefore serve as the transition matrix for a stochastic process. Here, D is a diagonal matrix of node degrees, i.e., D_ii = d_i = Σ_j A_ij. A random walk on G is a process generating sequences of vertices v_{i_1}, v_{i_2}, v_{i_3}, . . . according to P(i_{k+1}|i_1, . . . , i_k) = Ã_{i_{k+1}, i_k}. The t-th power of Ã thus describes t-length walks, i.e., (Ã^t)_{ij} is the probability of a transition from vertex v_j to vertex v_i via a walk of length t (problem BUGBUG). If p_0 is an initial probability distribution over vertices, then the probability distribution p_t describing the location of our random walker at time t is p_t = Ã^t p_0. The j-th component of p_t denotes the probability of finishing a t-length walk at vertex v_j. A random walk need not continue indefinitely; to model this, we associate every node v_{i_k} in the graph with a stopping probability q_{i_k}. The overall probability of stopping after t steps is given by q^T p_t.
Given two graphs G(V, E) and G′(V′, E′), their direct product G× is a graph with vertex set

V× = {(v_i, v′_r) : v_i ∈ V, v′_r ∈ V′},  (6.7)

and edge set

E× = {((v_i, v′_r), (v_j, v′_s)) : (v_i, v_j) ∈ E ∧ (v′_r, v′_s) ∈ E′}.  (6.8)

In other words, G× is a graph over pairs of vertices from G and G′, and two vertices in G× are neighbors if and only if the corresponding vertices in G and G′ are both neighbors; see Figure 6.1 for an illustration. If A and A′ are the respective adjacency matrices of G and G′, then the adjacency

Fig. 6.1. Two graphs (G1 & G2) and their direct product (G×). Each node of the
direct product graph is labeled with a pair of nodes (6.7); an edge exists in the
direct product if and only if the corresponding nodes are adjacent in both original
graphs (6.8). For instance, nodes 11′ and 32′ are adjacent because there is an edge
between nodes 1 and 3 in the first, and 1′ and 2′ in the second graph.

matrix of G× is A× = A ⊗ A′. Similarly, Ã× = Ã ⊗ Ã′. Performing a random walk on the direct product graph is equivalent to performing a simultaneous random walk on G and G′. If p and p′ denote initial probability distributions over the vertices of G and G′, then the corresponding initial probability distribution on the direct product graph is p× := p ⊗ p′. Likewise, if q and q′ are stopping probabilities (that is, the probability that a random walk ends at a given vertex), then the stopping probability on the direct product graph is q× := q ⊗ q′.
To define a kernel which computes the similarity between G and G′, one natural idea is to simply sum up q×^T Ã×^t p× for all values of t. However, this sum might not converge, leaving the kernel value undefined. To overcome this problem, we introduce appropriately chosen non-negative coefficients μ(t), and define the kernel between G and G′ as

k(G, G′) := Σ_{t=0}^∞ μ(t) q×^T Ã×^t p×.  (6.9)

This idea can be extended to graphs whose nodes are associated with labels by replacing the matrix Ã× with a matrix of label similarities. For appropriate choices of μ(t) the above sum converges and efficient algorithms for computing the kernel can be devised. See [?] for details.
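A dense numpy sketch of (6.9) follows, with the illustrative choices μ(t) = μ^t, uniform starting and stopping distributions, and the infinite sum truncated after T terms; real implementations exploit the Kronecker structure instead of forming Ã× explicitly.

import numpy as np

def random_walk_kernel(A1, A2, mu=0.1, T=20):
    """Truncated version of (6.9) with mu(t) = mu^t and uniform p, q."""
    def normalize(A):
        return A / A.sum(axis=0, keepdims=True)    # A D^{-1}
    Ax = np.kron(normalize(A1), normalize(A2))     # walk matrix of the product graph
    n = Ax.shape[0]
    p = np.ones(n) / n                             # p_x = p (x) p'
    q = np.ones(n) / n                             # q_x = q (x) q'
    k, pt = 0.0, p.copy()
    for t in range(T):
        k += (mu ** t) * (q @ pt)                  # mu(t) q_x^T Ax^t p_x
        pt = Ax @ pt
    return k

A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle
A2 = np.array([[0, 1], [1, 0]], float)                    # single edge
print(random_walk_kernel(A1, A2))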
As it turns out, the simple idea of performing a random walk on the product graph can be extended to compute kernels on Auto Regressive Moving Average (ARMA) models [VSV07]. Similarly, it can also be used to define kernels between transducers. Connections between the so-called rational kernels on transducers and the graph kernels defined via (6.9) are made explicit in [?].

6.2 Kernels
6.2.1 Feature Maps
give examples, linear classifier, nonlinear ones with r2-r3 map

6.2.2 The Kernel Trick


6.2.3 Examples of Kernels
gaussian, polynomial, linear, texts, graphs
- stress the fact that there is a difference between structure in the input
space and structure in the output space

6.3 Algorithms
6.3.1 Kernel Perceptron
6.3.2 Trivial Classifier
6.3.3 Kernel Principal Component Analysis
6.4 Reproducing Kernel Hilbert Spaces

As it turns out, this class of functions coincides with the class of positive semi-definite functions. Intuitively, the notion of a positive semi-definite function is an extension of the familiar notion of a positive semi-definite matrix (also see Appendix BUGBUG):

Definition 6.2 A real n × n symmetric matrix K satisfying

Σ_{i,j} α_i α_j K_{ij} ≥ 0    (6.10)

for all α_i, α_j ∈ R is called positive semi-definite. If equality in (6.10) occurs only when α_1, . . . , α_n = 0, then K is said to be positive definite.

Definition 6.3 Given a set of points x1, . . . , xn ∈ X and a function k, the


matrix
Ki,j = k(xi, xj) (6.11)

is called the Gram matrix or the kernel matrix of k with respect to x1, . . . , xn.

Definition 6.4 Let X be a nonempty set, k : X × X → R be a function. If


k gives rise to a positive (semi-)definite Gram matrix for all x1, . . . , xn ∈ X and
n ∈ N then k is said to be positive (semi-)definite.
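Definitions 6.2–6.4 suggest a direct numerical sanity check (an illustrative sketch, not part of the text): build the Gram matrix (6.11) on a sample and verify that its smallest eigenvalue is non-negative up to round-off.

import numpy as np

def gram_matrix(kernel, xs):
    # Gram matrix K_ij = k(x_i, x_j), cf. (6.11)
    m = len(xs)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = kernel(xs[i], xs[j])
    return K

xs = [np.random.randn(5) for _ in range(20)]
K = gram_matrix(lambda x, z: np.exp(-np.linalg.norm(x - z) ** 2), xs)
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True for a PSD kernel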
Clearly, every kernel function k of the form (6.1) is positive semi-definite. To see this simply write

Σ_{i,j} α_i α_j k(x_i, x_j) = Σ_{i,j} α_i α_j ⟨x_i, x_j⟩ = ⟨ Σ_i α_i x_i, Σ_j α_j x_j ⟩ ≥ 0.

We now establish the converse, that is, we show that every positive semi-definite kernel function can be written as (6.1). Towards this end, define a map Φ from X into the space of functions mapping X to R (denoted R^X) via Φ(x) = k(·, x). In other words, Φ(x) : X → R is a function which assigns the value k(x′, x) to x′ ∈ X. Next construct a vector space by taking all possible linear combinations of Φ(x):

f(·) = Σ_{i=1}^n α_i Φ(x_i) = Σ_{i=1}^n α_i k(·, x_i),    (6.12)

where n ∈ N, α_i ∈ R, and x_i ∈ X are arbitrary. This space can be endowed with a natural dot product

⟨f, g⟩ = Σ_{i=1}^n Σ_{j=1}^{n′} α_i β_j k(x_i, x′_j).    (6.13)

To see that the above dot product is well defined even though it contains the expansion coefficients (which need not be unique), note that ⟨f, g⟩ = Σ_{j=1}^{n′} β_j f(x′_j), independent of α_i. Similarly, for g, note that ⟨f, g⟩ = Σ_{i=1}^{n} α_i g(x_i), this time independent of β_j. This also shows that ⟨f, g⟩ is bilinear. Symmetry follows because ⟨f, g⟩ = ⟨g, f⟩, while the positive semi-definiteness of k implies that

⟨f, f⟩ = Σ_{i,j} α_i α_j k(x_i, x_j) ≥ 0.    (6.14)

Applying (6.13) shows that for all functions (6.12) we have

⟨f, k(·, x)⟩ = f(x).    (6.15)

In particular

⟨k(·, x), k(·, x′)⟩ = k(x, x′).    (6.16)

In view of these properties, k is called a reproducing kernel. By using (6.15) and the following property of positive semi-definite functions (Problem 6.1)

k(x, x′)² ≤ k(x, x) · k(x′, x′)    (6.17)

we can now write

|f(x)|² = |⟨f, k(·, x)⟩|² ≤ k(x, x) · ⟨f, f⟩.    (6.18)

From the above inequality, f = 0 whenever ⟨ f, f ⟩ = 0, thus establishing


⟨ ·, ·⟩ as a valid dot product. In fact, one can complete the space of functions
(6.12) in the norm corresponding to the dot product (6.13), and thus get a
Hilbert space H, called the reproducing kernel Hilbert Space (RKHS).
An alternate way to define an RKHS is as a Hilbert space H of functions from some input space X to R with the property that for any f ∈ H and x ∈ X, the point evaluation f ↦ f(x) is continuous (in particular, all point values f(x) are well defined, which already distinguishes an RKHS from many L₂ Hilbert spaces). Given the point evaluation functional, one can then
construct the reproducing kernel using the Riesz representation theorem.
The Moore-Aronszajn theorem states that, for every positive semi-definite
kernel on X × X, there exists a unique RKHS and vice versa.
We finish this section by noting that ⟨·, ·⟩ is a positive semi-definite function in the vector space of functions (6.12). This follows directly from the bilinearity of the dot product and (6.14), by which we can write for functions f_1, . . . , f_p and coefficients γ_1, . . . , γ_p

Σ_i Σ_j γ_i γ_j ⟨f_i, f_j⟩ = ⟨ Σ_i γ_i f_i, Σ_j γ_j f_j ⟩ ≥ 0.    (6.19)

6.4.1 Hilbert Spaces


evaluation functionals, inner products

6.4.2 Theoretical Properties


Mercer’s theorem, positive semidefiniteness

6.4.3 Regularization
Representer theorem, regularization

6.5 Banach Spaces


6.5.1 Properties
6.5.2 Norms and Convex Sets
- smoothest function (L2)
- smallest coefficients (L1)
- structured priors (CAP formalism)

Problems
Problem 6.1 Show that (6.17) holds for an arbitrary positive semi-definite
function k.

Problem 6.2 Show that the inhomogeneous polynomial kernel (6.3) is a


valid kernel and that it computes all monomials of degree up to d.

Problem 6.3 (k-spectrum kernel {2}) Given two strings x and x′, show how one can compute the k-spectrum kernel (Section 6.1.1.5) in O((|x| + |x′|)k) time. Hint: You need to use a trie.
7

Linear Models

A hyperplane in a space H endowed with a dot product ⟨·, ·⟩ is described by the set

{x ∈ H | ⟨w, x⟩ + b = 0}    (7.1)

where w ∈ H and b ∈ R. Such a hyperplane naturally divides H into two half-spaces: {x ∈ H | ⟨w, x⟩ + b ≥ 0} and {x ∈ H | ⟨w, x⟩ + b < 0}, and hence
can be used as the decision boundary of a binary classifier. In this chapter we
will study a number of algorithms which employ such linear decision
boundaries. Although such models look restrictive at first glance, when
combined with kernels (Chapter 6) they yield a large class of useful
algorithms.
All the algorithms we will study in this chapter maximize the margin. Given a set X = {x_1, . . . , x_m}, the margin is the distance of the closest point in X to the hyperplane (7.1). Elementary geometric arguments (Problem 7.1) show that the distance of a point x_i to a hyperplane is given by |⟨w, x_i⟩ + b| / ‖w‖, and hence the margin is simply

min_{i=1,...,m}  |⟨w, x_i⟩ + b| / ‖w‖.    (7.2)

Note that the parameterization of the hyperplane (7.1) is not unique; if we


multiply both w and b by the same non-zero constant, then we obtain the
same hyperplane. One way to resolve this ambiguity is to set

min_{i=1,...,m} |⟨w, x_i⟩ + b| = 1.

In this case, the margin simply becomes 1/‖w‖. We postpone justification of margin maximization for later and jump straight ahead to the description of various algorithms.
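As a small illustration of (7.2) (a sketch with assumed inputs), the margin of a given hyperplane (w, b) over a sample X can be computed directly; after the normalization above it equals 1/‖w‖.

import numpy as np

def margin(w, b, X):
    # (7.2): min_i |<w, x_i> + b| / ||w||
    return np.min(np.abs(X.dot(w) + b)) / np.linalg.norm(w)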

7.1 Support Vector Classification


Consider a binary classification task, where we are given a training set {(x_1, y_1), . . . , (x_m, y_m)} with x_i ∈ H and y_i ∈ {±1}. Our aim is to find a linear decision boundary parameterized by (w, b) such that ⟨w, x_i⟩ + b ≥ 0 whenever y_i = +1 and ⟨w, x_i⟩ + b < 0 whenever y_i = −1. Furthermore, as discussed above, we fix the scaling of w by requiring min_{i=1,...,m} |⟨w, x_i⟩ + b| = 1. A compact way to write our desiderata is to require y_i(⟨w, x_i⟩ + b) ≥ 1 for all i (also see Figure 7.1).

Fig. 7.1. A linearly separable toy binary classification problem of separating the diamonds from the circles. We normalize (w, b) to ensure that min_{i=1,...,m} |⟨w, x_i⟩ + b| = 1. In this case, the margin is given by 1/‖w‖, as the calculation in the inset shows.

The problem of maximizing the margin therefore reduces to

max_{w,b}  1/‖w‖    (7.3a)
s.t.  y_i(⟨w, x_i⟩ + b) ≥ 1 for all i,    (7.3b)

or equivalently

min_{w,b}  (1/2)‖w‖²    (7.4a)
s.t.  y_i(⟨w, x_i⟩ + b) ≥ 1 for all i.    (7.4b)

This is a constrained convex optimization problem with a quadratic objective function and linear constraints (see Section 3.3). In deriving (7.4) we implicitly
assumed that the data is linearly separable, that is, there is a hyperplane
which correctly classifies the training data. Such a classifier is called a hard
margin classifier. If the data is not linearly separable, then (7.4) does not
have a solution. To deal with this situation we introduce

non-negative slack variables ξi to relax the constraints:

yi(⟨ w, xi⟩ + b) ≥ 1 − ξi.

Given any w and b the constraints can now be satisfied by making ξi large
enough. This renders the whole optimization problem useless. Therefore, one
has to penalize large ξi. This is done via the following modified optimization
problem:
min_{w,b,ξ}  (1/2)‖w‖² + (C/m) Σ_{i=1}^m ξ_i    (7.5a)
s.t.  y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i for all i    (7.5b)
      ξ_i ≥ 0,    (7.5c)

where C > 0 is a penalty parameter. The resultant classifier is said to be a


soft margin classifier. By introducing non-negative Lagrange multipliers αi
and βi one can write the Lagrangian (see Section 3.3)

L(w, b, ξ, α, β) = (1/2)‖w‖² + (C/m) Σ_{i=1}^m ξ_i + Σ_{i=1}^m α_i (1 − ξ_i − y_i(⟨w, x_i⟩ + b)) − Σ_{i=1}^m β_i ξ_i.

Next take gradients with respect to w, b and ξ and set them to zero.
∇_w L = w − Σ_{i=1}^m α_i y_i x_i = 0    (7.6a)
∇_b L = − Σ_{i=1}^m α_i y_i = 0    (7.6b)
∇_{ξ_i} L = C/m − α_i − β_i = 0.    (7.6c)
Substituting (7.6) into the Lagrangian and simplifying yields the dual ob-
jective function:

− (1/2) Σ_{i,j} y_i y_j α_i α_j ⟨x_i, x_j⟩ + Σ_{i=1}^m α_i,    (7.7)

which needs to be maximized with respect to α. For notational convenience


we will minimize the negative of (7.7) below. Next we turn our attention to the dual constraints. Recall that α_i ≥ 0 and β_i ≥ 0, which in conjunction with (7.6c) immediately yields 0 ≤ α_i ≤ C/m. Furthermore, by (7.6b), Σ_{i=1}^m α_i y_i = 0. Putting everything together, the dual optimization problem

boils down to

min_α  (1/2) Σ_{i,j} y_i y_j α_i α_j ⟨x_i, x_j⟩ − Σ_{i=1}^m α_i    (7.8a)
s.t.  Σ_{i=1}^m α_i y_i = 0    (7.8b)
      0 ≤ α_i ≤ C/m.    (7.8c)
If we let H be an m × m matrix with entries H_ij = y_i y_j ⟨x_i, x_j⟩, while e, α, and y are m-dimensional vectors whose i-th components are one, α_i, and y_i respectively, then the above dual can be compactly written as the following Quadratic Program (QP) (Section 3.3.3):
min_α  (1/2) αᵀHα − αᵀe    (7.9a)
s.t.  αᵀy = 0    (7.9b)
      0 ≤ α_i ≤ C/m.    (7.9c)
Before turning our attention to algorithms for solving (7.9), a number of
observations are in order. First, note that computing H only requires com-
puting dot products between training examples. If we map the input data to
a Reproducing Kernel Hilbert Space (RKHS) via a feature map φ, then we
can still compute the entries of H and solve for the optimal α. In this case,
Hij = yi yj ⟨ φ(xi ), φ(xj )⟩ = yi yj k(xi , xj ), where k is the kernel associated
with the RKHS. Given the optimal α, one can easily recover the decision
boundary. This is a direct consequence of (7.6a), which allows us to write w
as a linear combination of the training data:
w = Σ_{i=1}^m α_i y_i φ(x_i),

and hence the decision boundary as


⟨w, x⟩ + b = Σ_{i=1}^m α_i y_i k(x_i, x) + b.    (7.10)

By the KKT conditions (Section 3.3) we have


αi(1 − ξi − yi(⟨ w, xi⟩ + b)) = 0 and βiξi = 0.
We now consider three cases for yi(⟨ w, xi⟩ + b) and the implications of the
KKT conditions (see Figure 7.2).

Fig. 7.2. The picture depicts the well classified points (y_i(⟨w, x_i⟩ + b) > 1) in black, the support vectors (y_i(⟨w, x_i⟩ + b) = 1) in blue, and margin errors (y_i(⟨w, x_i⟩ + b) < 1) in red.

y_i(⟨w, x_i⟩ + b) < 1: In this case, ξ_i > 0, and hence the KKT conditions imply that β_i = 0. Consequently, α_i = C/m (see (7.6c)). Such points are said to be margin errors.
y_i(⟨w, x_i⟩ + b) > 1: In this case, ξ_i = 0, (1 − ξ_i − y_i(⟨w, x_i⟩ + b)) < 0, and by the KKT conditions α_i = 0. Such points are said to be well classified. It is easy to see that the decision boundary (7.10) does not change even if these points are removed from the training set.
y_i(⟨w, x_i⟩ + b) = 1: In this case ξ_i = 0 and β_i ≥ 0. Since α_i is non-negative and satisfies (7.6c) it follows that 0 ≤ α_i ≤ C/m. Such points are said to be on the margin. They are also sometimes called support vectors.

Since the support vectors satisfy y_i(⟨w, x_i⟩ + b) = 1 and y_i ∈ {±1}, it follows that b = y_i − ⟨w, x_i⟩ for any support vector x_i. However, in practice, to recover b we average

b = y_i − ⟨w, x_i⟩    (7.11)

over all support vectors, that is, points x_i for which 0 < α_i < C/m. The overall algorithm is called the C-Support Vector classifier, or C-SV classifier for short.

7.1.1 A Regularized Risk Minimization Viewpoint


A closer examination of (7.5) reveals that ξ_i = 0 whenever y_i(⟨w, x_i⟩ + b) > 1. On the other hand, ξ_i = 1 − y_i(⟨w, x_i⟩ + b) whenever y_i(⟨w, x_i⟩ + b) < 1. In short, ξ_i = max(0, 1 − y_i(⟨w, x_i⟩ + b)). Using this observation one can eliminate ξ_i from (7.5), and write it as the following unconstrained optimization problem:

min_{w,b}  (1/2)‖w‖² + (C/m) Σ_{i=1}^m max(0, 1 − y_i(⟨w, x_i⟩ + b)).    (7.12)

Writing (7.5) as (7.12) is particularly revealing because it shows that a support vector classifier is nothing but a regularized risk minimizer. Here the regularizer is the square norm of the decision hyperplane, (1/2)‖w‖², and the loss function is the so-called binary hinge loss (Figure 7.3):

l(w, x, y) = max(0, 1 − y(⟨w, x⟩ + b)).    (7.13)

It is easy to verify that the binary hinge loss (7.13) is convex but non-differentiable (see Figure 7.3), which renders the overall objective function (7.12) convex but non-smooth. There are two different strategies to minimize such an objective function. If minimizing (7.12) in the primal, one can employ non-smooth convex optimizers such as bundle methods (Section 3.2.7). This yields a d-dimensional problem, where d is the dimension of x. On the other hand, since (7.12) is strongly convex because of the presence of the (1/2)‖w‖² term, its Fenchel dual has a Lipschitz continuous gradient (see Lemma 3.10). The dual problem is m-dimensional and contains linear constraints. This strategy is particularly attractive when the kernel trick is used or whenever d ≫ m. In fact, the dual problem obtained via Fenchel duality is closely related to the quadratic programming problem (7.9) obtained via Lagrange duality (Problem 7.4).
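To illustrate the primal point of view, here is a plain subgradient descent sketch for (7.12) (illustrative code with an assumed constant step size; the bundle and dual methods mentioned above are the more principled choices):

import numpy as np

def svm_primal_subgradient(X, y, C=1.0, steps=1000, eta=0.01):
    m, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        viol = y * (X.dot(w) + b) < 1            # points with non-zero hinge loss
        gw = w - (C / m) * (y[viol, None] * X[viol]).sum(axis=0)
        gb = -(C / m) * y[viol].sum()
        w, b = w - eta * gw, b - eta * gb        # subgradient step on (7.12)
    return w, b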

7.1.2 An Exponential Family Interpretation


Our motivating arguments for deriving the SVM algorithm have largely been geometric. We now show that an equally elegant probabilistic interpretation also exists. Assuming that the training set {(x_1, y_1), . . . , (x_m, y_m)} was drawn iid from some underlying distribution, and using the Bayes rule (1.15), one can write

p(θ|X, Y) ∝ p(θ) p(Y|X, θ) = p(θ) Π_{i=1}^m p(y_i|x_i, θ),    (7.14)

Fig. 7.3. The binary hinge loss. Note that the loss is convex but non-differentiable at the kink point. Furthermore, it increases linearly as the distance from the decision hyperplane y(⟨w, x⟩ + b) decreases.

and hence the negative log-posterior

− log p(θ|X, Y) = − Σ_{i=1}^m log p(y_i|x_i, θ) − log p(θ) + const.    (7.15)

In the absence of any prior knowledge about the data, we choose a zero mean unit variance isotropic normal distribution for p(θ). This yields

− log p(θ|X, Y) = (1/2)‖θ‖² − Σ_{i=1}^m log p(y_i|x_i, θ) + const.    (7.16)

The maximum a posteriori (MAP) estimate for θ is obtained by minimizing (7.16) with respect to θ. Given the optimal θ, we can predict the class label at any given x via

y* = argmax_y p(y|x, θ).    (7.17)

Of course, our aim is not just to maximize p(y_i|x_i, θ) but also to ensure that p(y|x_i, θ) is small for all y ≠ y_i. This, for instance, can be achieved by requiring

p(y_i|x_i, θ) / p(y|x_i, θ) ≥ η,  for all y ≠ y_i and some η ≥ 1.    (7.18)

As we saw in Section 2.3, exponential families of distributions are rather flexible modeling tools. We could, for instance, model p(y_i|x_i, θ) as a conditional exponential family distribution. Recall the definition:

p(y|x, θ) = exp(⟨φ(x, y), θ⟩ − g(θ|x)).    (7.19)

Here φ(x, y) is a joint feature map which depends on both the input data x and the label y, while g(θ|x) is the log-partition function. Now (7.18) boils down to

p(y_i|x_i, θ) / max_{y≠y_i} p(y|x_i, θ) = exp( ⟨φ(x_i, y_i), θ⟩ − max_{y≠y_i} ⟨φ(x_i, y), θ⟩ ) ≥ η.    (7.20)

If we choose η such that log η = 1, set φ(x, y) = (y/2)φ(x), and observe that y ∈ {±1}, we can rewrite (7.20) as

⟨(y_i/2)φ(x_i) − (−y_i/2)φ(x_i), θ⟩ = y_i ⟨φ(x_i), θ⟩ ≥ 1.    (7.21)

By replacing − log p(y_i|x_i, θ) in (7.16) with the condition (7.21) we obtain the following objective function:

min_θ  (1/2)‖θ‖²    (7.22a)
s.t.  y_i ⟨φ(x_i), θ⟩ ≥ 1 for all i,    (7.22b)

which recovers (7.4), but without the bias b. The prediction function is recovered by noting that (7.17) specializes to

y* = argmax_{y∈{±1}} ⟨φ(x, y), θ⟩ = argmax_{y∈{±1}} (y/2)⟨φ(x), θ⟩ = sign(⟨φ(x), θ⟩).    (7.23)

As before, we can replace (7.21) by a linear penalty for constraint violation in order to recover (7.5). The quantity log[ p(y_i|x_i, θ) / max_{y≠y_i} p(y|x_i, θ) ] is sometimes called the log-odds ratio, and the above discussion shows that SVMs can be interpreted as maximizing the log-odds ratio in the exponential family. This interpretation will be developed further when we consider extensions of SVMs to tackle multiclass, multilabel, and structured prediction problems.

7.1.3 Specialized Algorithms for Training SVMs


The main task in training SVMs boils down to solving (7.9). The m × m
matrix H is usually dense and cannot be stored in memory. Decomposition
methods are designed to overcome these difficulties. The basic idea here
is to identify and update a small working set B by solving a small sub-
problem at every iteration. Formally, let B ⊂ {1, . . . , m} be the working set
and αB be the corresponding sub-vector of α. Define B̄ = {1, . . . , m} \ B
and αB̄ analogously. In order to update αB we need to solve the following
sub-problem of (7.9) obtained by freezing αB̄ :

min_{α_B}  (1/2) [α_B; α_B̄]ᵀ [H_BB  H_BB̄; H_B̄B  H_B̄B̄] [α_B; α_B̄] − [α_B; α_B̄]ᵀ e    (7.24a)
s.t.  [α_B; α_B̄]ᵀ y = 0    (7.24b)
      0 ≤ α_i ≤ C/m for all i ∈ B.    (7.24c)

Here, [H_BB  H_BB̄; H_B̄B  H_B̄B̄] is a permutation of the matrix H. By eliminating constant terms and rearranging, one can simplify the above problem to

min_{α_B}  (1/2) α_Bᵀ H_BB α_B + α_Bᵀ (H_BB̄ α_B̄ − e)    (7.25a)
s.t.  α_Bᵀ y_B = − α_B̄ᵀ y_B̄    (7.25b)
      0 ≤ α_i ≤ C/m for all i ∈ B.    (7.25c)
An extreme case of a decomposition method is the Sequential Minimal Optimization (SMO) algorithm of Platt [Pla99], which updates only two coefficients per iteration. The advantage of this strategy, as we will see below, is that the resultant sub-problem can be solved analytically. Without loss of generality let B = {i, j}, set s = y_i/y_j, let c_i and c_j denote the components of (H_BB̄ α_B̄ − e) corresponding to i and j, and let d = −(α_B̄ᵀ y_B̄)/y_j. Then (7.25) specializes to

min_{α_i,α_j}  (1/2)(H_ii α_i² + H_jj α_j² + 2H_ij α_i α_j) + c_i α_i + c_j α_j    (7.26a)
s.t.  s α_i + α_j = d    (7.26b)
      0 ≤ α_i, α_j ≤ C/m.    (7.26c)
This QP in two variables has an analytic solution.

Lemma 7.1 (Analytic solution of 2 variable QP) Define bounds

L = max(0, (d − C/m)/s) if s > 0, and L = max(0, d/s) otherwise,    (7.27)
H = min(C/m, d/s) if s > 0, and H = min(C/m, (d − C/m)/s) otherwise,    (7.28)

and auxiliary variables

χ = (H_ii + H_jj s² − 2sH_ij)  and    (7.29)
ρ = (c_j s − c_i − H_ij d + H_jj ds).    (7.30)

The optimal value of (7.26) can be computed analytically as follows: If χ = 0 then

α_i = L if ρ < 0, and α_i = H otherwise.

If χ > 0, then α_i = max(L, min(H, ρ/χ)). In both cases, α_j = (d − sα_i).
Proof Eliminate the equality constraint by setting α_j = (d − sα_i). Due to the constraint 0 ≤ α_j ≤ C/m it follows that sα_i = d − α_j can be bounded via d − C/m ≤ sα_i ≤ d. Combining this with 0 ≤ α_i ≤ C/m one can write L ≤ α_i ≤ H, where L and H are given by (7.27) and (7.28) respectively. Substituting α_j = (d − sα_i) into the objective function, dropping the terms which do not depend on α_i, and simplifying by substituting χ and ρ yields the following optimization problem in α_i:

min_{α_i}  (1/2) α_i² χ − α_i ρ
s.t.  L ≤ α_i ≤ H.

First consider the case when χ = 0. In this case, α_i = L if ρ < 0, otherwise α_i = H. On the other hand, if χ > 0 then the unconstrained optimum of the above optimization problem is given by ρ/χ. The constrained optimum is obtained by clipping appropriately: max(L, min(H, ρ/χ)). This concludes the proof.
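Lemma 7.1 translates directly into code. The following sketch (illustrative, with inputs as defined around (7.26)) returns the updated pair (α_i, α_j):

def smo_two_var_update(Hii, Hjj, Hij, ci, cj, s, d, C_over_m):
    # Bounds (7.27) and (7.28)
    if s > 0:
        L = max(0.0, (d - C_over_m) / s)
        H = min(C_over_m, d / s)
    else:
        L = max(0.0, d / s)
        H = min(C_over_m, (d - C_over_m) / s)
    # Auxiliary variables (7.29) and (7.30)
    chi = Hii + Hjj * s**2 - 2 * s * Hij
    rho = cj * s - ci - Hij * d + Hjj * d * s
    if chi == 0.0:
        alpha_i = L if rho < 0 else H
    else:
        alpha_i = max(L, min(H, rho / chi))  # clip the unconstrained optimum
    return alpha_i, d - s * alpha_i          # alpha_j from the equality constraint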
To complete the description of SMO we need a valid stopping criterion as well as a scheme for selecting the working set at every iteration. In order to derive a stopping criterion we will use the KKT gap, that is, the extent to which the KKT conditions are violated. Towards this end introduce a Lagrange multiplier b ∈ R and non-negative Lagrange multipliers λ ∈ R^m and µ ∈ R^m, and write the Lagrangian of (7.9):

L(α, b, λ, µ) = (1/2) αᵀHα − αᵀe + b αᵀy − λᵀα + µᵀ(α − (C/m) e).    (7.31)

If we let J(α) = (1/2) αᵀHα − αᵀe be the objective function and ∇J(α) = Hα − e its gradient, then taking the gradient of the Lagrangian with respect to α and setting it to 0 shows that

∇J(α) + by = λ − µ.    (7.32)

Furthermore, by the KKT conditions we have

λ_i α_i = 0  and  µ_i (C/m − α_i) = 0,    (7.33)

with λ_i ≥ 0 and µ_i ≥ 0. Equations (7.32) and (7.33) can be compactly rewritten as

∇J(α)_i + b y_i ≥ 0  if α_i = 0    (7.34a)
∇J(α)_i + b y_i ≤ 0  if α_i = C/m    (7.34b)
∇J(α)_i + b y_i = 0  if 0 < α_i < C/m.    (7.34c)

Since y_i ∈ {±1}, we can further rewrite (7.34) as

−y_i ∇J(α)_i ≤ b  for all i ∈ I_up
−y_i ∇J(α)_i ≥ b  for all i ∈ I_down,

where the index sets I_up and I_down are defined as

I_up = {i : α_i < C/m, y_i = 1 or α_i > 0, y_i = −1}    (7.35a)
I_down = {i : α_i < C/m, y_i = −1 or α_i > 0, y_i = 1}.    (7.35b)

In summary, the KKT conditions imply that α is a solution of (7.9) if and only if

m(α) ≤ M(α),

where

m(α) = max_{i∈I_up} −y_i ∇J(α)_i  and  M(α) = min_{i∈I_down} −y_i ∇J(α)_i.    (7.36)

Therefore, a natural stopping criterion is to stop when the KKT gap falls below a desired tolerance ϵ, that is,

m(α) ≤ M(α) + ϵ.    (7.37)

Finally, we turn our attention to the issue of working set selection. The first order approximation to the objective function J(α) can be written as

J(α + d) ≈ J(α) + ∇J(α)ᵀ d.

Since we are only interested in updating coefficients in the working set B we set dᵀ = [d_Bᵀ  0], in which case we can rewrite the above first order approximation as

∇J(α)_Bᵀ d_B ≈ J(α + d) − J(α).

From among all possible directions d_B we wish to choose one which decreases the objective function the most while maintaining feasibility. This is best expressed as the following optimization problem:

min_{d_B}  ∇J(α)_Bᵀ d_B    (7.38a)
s.t.  y_Bᵀ d_B = 0    (7.38b)
      d_i ≥ 0 if α_i = 0 and i ∈ B    (7.38c)
      d_i ≤ 0 if α_i = C/m and i ∈ B    (7.38d)
      −1 ≤ d_i ≤ 1.    (7.38e)

Here (7.38b) comes from yᵀ(α + d) = 0 and yᵀα = 0, while (7.38c) and (7.38d) come from 0 ≤ α_i ≤ C/m. Finally, (7.38e) prevents the objective function from diverging to −∞. If we specialize (7.38) to SMO, we obtain

min_{i,j}  ∇J(α)_i d_i + ∇J(α)_j d_j    (7.39a)
s.t.  y_i d_i + y_j d_j = 0    (7.39b)
      d_k ≥ 0 if α_k = 0 and k ∈ {i, j}    (7.39c)
      d_k ≤ 0 if α_k = C/m and k ∈ {i, j}    (7.39d)
      −1 ≤ d_k ≤ 1 for k ∈ {i, j}.    (7.39e)

At first glance, it seems that choosing the optimal i and j from the set {1, . . . , m} × {1, . . . , m} requires O(m²) effort. We now show that O(m) effort suffices.

Define new variables d̂_k = y_k d_k for k ∈ {i, j}, and use the observation y_k ∈ {±1} to rewrite the objective function as

(−y_i ∇J(α)_i + y_j ∇J(α)_j) d̂_j.

Consider the case −y_i ∇J(α)_i ≥ −y_j ∇J(α)_j. Because of the constraints (7.39c) and (7.39d), if we choose i ∈ I_up and j ∈ I_down, then d̂_j = −1 and d̂_i = 1 is feasible and the objective function attains a negative value. For all other choices of i and j (i, j ∈ I_up; i, j ∈ I_down; i ∈ I_down and j ∈ I_up) the objective function value of 0 is attained by setting d̂_i = d̂_j = 0. The case −y_j ∇J(α)_j ≥ −y_i ∇J(α)_i is analogous. In summary, the optimization problem (7.39) boils down to

min_{i∈I_up, j∈I_down}  y_i ∇J(α)_i − y_j ∇J(α)_j = min_{i∈I_up} y_i ∇J(α)_i − max_{j∈I_down} y_j ∇J(α)_j,

which clearly can be solved in O(m) time. Comparison with (7.36) shows that at every iteration of SMO we choose to update coefficients α_i and α_j which maximally violate the KKT conditions.
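The stopping criterion (7.37) and the maximally violating pair selection can be sketched as follows (illustrative code; `grad` is assumed to hold ∇J(α) = Hα − e, and both index sets are assumed non-empty):

import numpy as np

def select_working_set(alpha, y, grad, C_over_m, eps=1e-3):
    # I_up and I_down as in (7.35)
    up = ((alpha < C_over_m) & (y == 1)) | ((alpha > 0) & (y == -1))
    down = ((alpha < C_over_m) & (y == -1)) | ((alpha > 0) & (y == 1))
    viol = -y * grad                       # the quantities -y_i ∇J(α)_i
    i = np.where(up)[0][np.argmax(viol[up])]
    j = np.where(down)[0][np.argmin(viol[down])]
    if viol[i] <= viol[j] + eps:           # m(α) ≤ M(α) + ϵ, cf. (7.37)
        return None                        # KKT gap small enough: stop
    return i, j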

7.2 Extensions
7.2.1 The ν trick
In the soft margin formulation the parameter C is a trade-off between two conflicting requirements, namely maximizing the margin and minimizing the training error. Unfortunately, this parameter is rather unintuitive and hence difficult to tune. The ν-SVM was proposed to address this issue. As Theorem 7.3 below shows, ν controls the number of support vectors and margin errors.
The primal problem for the ν-SVM can be written as

min_{w,b,ξ,ρ}  (1/2)‖w‖² − ρ + (1/(νm)) Σ_{i=1}^m ξ_i    (7.40a)
s.t.  y_i(⟨w, x_i⟩ + b) ≥ ρ − ξ_i for all i    (7.40b)
      ξ_i ≥ 0, and ρ ≥ 0.    (7.40c)

As before, if we write the Lagrangian by introducing non-negative Lagrange multipliers, take gradients with respect to the primal variables and set them to zero, and substitute the result back into the Lagrangian, we obtain the following dual:

min_α  (1/2) Σ_{i,j} y_i y_j α_i α_j ⟨x_i, x_j⟩    (7.41a)
s.t.  Σ_{i=1}^m α_i y_i = 0    (7.41b)
      Σ_{i=1}^m α_i ≥ 1    (7.41c)
      0 ≤ α_i ≤ 1/(νm).    (7.41d)
It turns out that the dual can be further simplified via the following lemma.

Lemma 7.2 Let ν ∈ [0, 1] and (7.41) be feasible. Then there is at least one solution α which satisfies Σ_i α_i = 1. Furthermore, if the final objective value of (7.41) is non-zero then all solutions satisfy Σ_i α_i = 1.

Proof The feasible region of (7.41) is bounded, therefore if it is feasible then there exists an optimal solution. Let α denote this solution and assume that Σ_i α_i > 1. In this case we can define

ᾱ = (1 / Σ_j α_j) α,

and easily check that ᾱ is also feasible. As before, let H denote the m × m matrix with H_ij = y_i y_j ⟨x_i, x_j⟩. Since α is the optimal solution of (7.41) it follows that

(1/2) αᵀHα ≤ (1/2) ᾱᵀHᾱ = (1 / Σ_j α_j)² (1/2) αᵀHα ≤ (1/2) αᵀHα.

This implies that either (1/2) αᵀHα = 0, in which case ᾱ is an optimal solution with the desired property, or (1/2) αᵀHα ≠ 0, in which case all optimal solutions satisfy Σ_i α_i = 1.
In view of the above lemma one can equivalently replace (7.41) by the following simplified optimization problem with two equality constraints:

min_α  (1/2) Σ_{i,j} y_i y_j α_i α_j ⟨x_i, x_j⟩    (7.42a)
s.t.  Σ_{i=1}^m α_i y_i = 0    (7.42b)
      Σ_{i=1}^m α_i = 1    (7.42c)
      0 ≤ α_i ≤ 1/(νm).    (7.42d)

The following theorems, which we state without proof, explain the significance of ν and the connection between the ν-SVM and the soft margin formulation.

Theorem 7.3 Suppose we run ν-SVM with kernel k on some data and obtain ρ > 0. Then

(i) ν is an upper bound on the fraction of margin errors, that is, points for which y_i(⟨w, x_i⟩ + b) < ρ.
(ii) ν is a lower bound on the fraction of support vectors, that is, points for which y_i(⟨w, x_i⟩ + b) = ρ.
(iii) Suppose the data (X, Y) were generated iid from a distribution p(x, y) such that neither p(x, y = +1) nor p(x, y = −1) contains any discrete components. Moreover, assume that the kernel k is analytic and non-constant. With probability 1, asymptotically, ν equals both the fraction of support vectors and the fraction of margin errors.

Theorem 7.4 If (7.40) leads to a decision function with ρ > 0, then (7.5) with C = 1/ρ leads to the same decision function.

7.2.2 Squared Hinge Loss


In binary classification, the actual loss which one would like to minimize is the so-called 0-1 loss

l(w, x, y) = { 0 if y(⟨w, x⟩ + b) ≥ 0;  1 otherwise }.    (7.43)

This loss is difficult to work with because it is non-convex (see Figure 7.4).

Fig. 7.4. The 0-1 loss, which is non-convex and intractable, is depicted in red. The hinge loss is a convex upper bound to the 0-1 loss and shown in blue. The squared hinge loss is a differentiable convex upper bound to the 0-1 loss and is depicted in green.

In fact, it has been shown that finding the optimal (w, b) pair which minimizes the 0-1 loss on a training dataset of m labeled points is NP hard [BDEL03]. Therefore various proxy functions such as the binary hinge loss (7.13), which we discussed in Section 7.1.1, are used. Another popular proxy is the squared hinge loss:

l(w, x, y) = max(0, 1 − y(⟨w, x⟩ + b))².    (7.44)

Besides being a proxy for the 0-1 loss, the squared hinge loss, unlike the hinge
loss, is also differentiable everywhere. This sometimes makes the opti-
mization in the primal easier. Just like in the case of the hinge loss one can
derive the dual of the regularized risk minimization problem and show that
it is a quadratic programming problem (problem 7.5).

7.2.3 Ramp Loss


The ramp loss

l(w, x, y) = min(1 − s, max(0, 1 − y(⟨w, x⟩ + b)))    (7.45)

parameterized by s ≤ 0 is another proxy for the 0-1 loss (see Figure 7.5). Although not convex, it can be expressed as the difference of two convex functions:

l_conc(w, x, y) = max(0, 1 − y(⟨w, x⟩ + b))  and
l_cave(w, x, y) = max(0, s − y(⟨w, x⟩ + b)).
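The decomposition can be checked numerically (a small sketch): the ramp loss coincides with l_conc − l_cave pointwise.

import numpy as np

s = -0.3
yf = np.linspace(-3, 3, 601)                        # values of y(⟨w, x⟩ + b)
ramp = np.minimum(1 - s, np.maximum(0, 1 - yf))     # (7.45)
lconc = np.maximum(0, 1 - yf)
lcave = np.maximum(0, s - yf)
print(np.allclose(ramp, lconc - lcave))             # True: difference of convex functions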

Fig. 7.5. The ramp loss depicted here with s = −0.3 can be viewed as the sum of a convex function, namely the binary hinge loss (left), and a concave function min(0, 1 − y(⟨w, x⟩ + b)) (right). Viewed alternatively, the ramp loss can be written as the difference of two convex functions.

Therefore the Convex-Concave procedure (CCP) we discussed in Section 3.5.1 can be used to solve the resulting regularized risk minimization problem with the ramp loss. Towards this end write

J(w) = [ (1/2)‖w‖² + (C/m) Σ_{i=1}^m l_conc(w, x_i, y_i) ] − [ (C/m) Σ_{i=1}^m l_cave(w, x_i, y_i) ],    (7.46)

where the first bracketed term is denoted J_conc(w) and the second J_cave(w).

Recall that at every iteration of the CCP we replace J_cave(w) by its first order Taylor approximation, computing which requires

∂_w J_cave(w) = (C/m) Σ_{i=1}^m ∂_w l_cave(w, x_i, y_i).    (7.47)

This in turn can be computed as

∂_w l_cave(w, x_i, y_i) = δ_i y_i x_i  with  δ_i = { −1 if s > y_i(⟨w, x_i⟩ + b);  0 otherwise }.    (7.48)

Ignoring constant terms, each iteration of the CCP algorithm involves solving the following minimization problem (also see (3.134)):

J(w) = (1/2)‖w‖² + (C/m) Σ_{i=1}^m l_conc(w, x_i, y_i) − (C/m) ⟨ Σ_{i=1}^m δ_i y_i x_i, w ⟩.    (7.49)

Let δ denote a vector in R^m with components δ_i. Using the same notation as in (7.9) we can write the following dual optimization problem, which is very closely related to the standard SVM dual (7.9) (see Problem 7.6):

min_α  (1/2) αᵀHα − αᵀe    (7.50a)
s.t.  αᵀy = 0    (7.50b)
      −(C/m) δ ≤ α ≤ (C/m)(e − δ).    (7.50c)

In fact, this problem can be solved by an SMO solver with minor modifications. Putting everything together yields Algorithm 7.1.

Algorithm 7.1 CCP for Ramp Loss
1: Initialize δ⁰ and α⁰
2: repeat
3:   Solve (7.50) to find α^{t+1}
4:   Compute δ^{t+1} using (7.48)
5: until δ^{t+1} = δ^t

7.3 Support Vector Regression


As opposed to classification where the labels yi are binary valued, in re-
gression they are real valued. Given a tolerance ϵ, our aim here is to find a

hyperplane parameterized by (w, b) such that

|y_i − (⟨w, x_i⟩ + b)| ≤ ϵ.    (7.51)

Fig. 7.6. The ϵ-insensitive loss. All points which lie within the ϵ tube shaded in gray incur zero loss, while points outside incur a linear loss.

In other words, we want to find a hyperplane such that all the training data lies within an ϵ tube around the hyperplane. We may not always be able to find such a hyperplane, hence we relax the above condition by introducing slack variables ξ_i⁺ and ξ_i⁻ and write the corresponding primal problem as

min_{w,b,ξ⁺,ξ⁻}  (1/2)‖w‖² + (C/m) Σ_{i=1}^m (ξ_i⁺ + ξ_i⁻)    (7.52a)
s.t.  y_i − (⟨w, x_i⟩ + b) ≤ ϵ + ξ_i⁺ for all i    (7.52b)
      (⟨w, x_i⟩ + b) − y_i ≤ ϵ + ξ_i⁻ for all i    (7.52c)
      ξ_i⁺ ≥ 0, and ξ_i⁻ ≥ 0.    (7.52d)

The Lagrangian can be written by introducing non-negative Lagrange multipliers α_i⁺, α_i⁻, β_i⁺ and β_i⁻:

L(w, b, ξ⁺, ξ⁻, α⁺, α⁻, β⁺, β⁻) = (1/2)‖w‖² + (C/m) Σ_{i=1}^m (ξ_i⁺ + ξ_i⁻) − Σ_{i=1}^m (β_i⁺ ξ_i⁺ + β_i⁻ ξ_i⁻)
    + Σ_{i=1}^m α_i⁺ (y_i − (⟨w, x_i⟩ + b) − ϵ − ξ_i⁺)
    + Σ_{i=1}^m α_i⁻ ((⟨w, x_i⟩ + b) − y_i − ϵ − ξ_i⁻).

Taking gradients with respect to the primal variables and setting them to 0, we obtain the following conditions:

w = Σ_{i=1}^m (α_i⁺ − α_i⁻) x_i    (7.53)
Σ_{i=1}^m α_i⁺ = Σ_{i=1}^m α_i⁻    (7.54)
α_i⁺ + β_i⁺ = C/m    (7.55)
α_i⁻ + β_i⁻ = C/m.    (7.56)

Noting that α_i^{+,−}, β_i^{+,−} ≥ 0 and substituting the above conditions into the Lagrangian yields the dual

min_{α⁺,α⁻}  (1/2) Σ_{i,j} (α_i⁺ − α_i⁻)(α_j⁺ − α_j⁻) ⟨x_i, x_j⟩ + ϵ Σ_{i=1}^m (α_i⁺ + α_i⁻) − Σ_{i=1}^m y_i (α_i⁺ − α_i⁻)    (7.57a)
s.t.  Σ_{i=1}^m α_i⁺ = Σ_{i=1}^m α_i⁻    (7.57b)
      0 ≤ α_i⁺ ≤ C/m    (7.57c)
      0 ≤ α_i⁻ ≤ C/m.    (7.57d)

This is a quadratic programming problem with one equality constraint, and hence an SMO-like decomposition method can be derived for finding the optimal coefficients α⁺ and α⁻ (Problem 7.7).

As a consequence of (7.53), analogous to the classification case, one can map the data via a feature map φ into an RKHS with kernel k and recover the decision boundary f(x) = ⟨w, φ(x)⟩ + b via

f(x) = Σ_{i=1}^m (α_i⁺ − α_i⁻) ⟨φ(x_i), φ(x)⟩ + b = Σ_{i=1}^m (α_i⁺ − α_i⁻) k(x_i, x) + b.    (7.58)

Finally, the KKT conditions

(C/m − α_i⁺) ξ_i⁺ = 0  and  (C/m − α_i⁻) ξ_i⁻ = 0,
α_i⁺ (y_i − (⟨w, x_i⟩ + b) − ϵ − ξ_i⁺) = 0  and  α_i⁻ ((⟨w, x_i⟩ + b) − y_i − ϵ − ξ_i⁻) = 0,

allow us to draw many useful conclusions:

• Whenever |y_i − (⟨w, x_i⟩ + b)| < ϵ, we have ξ_i⁺ = ξ_i⁻ = α_i⁺ = α_i⁻ = 0. In other words, points which lie inside the ϵ tube around the hyperplane ⟨w, x⟩ + b do not contribute to the solution, thus leading to sparse expansions in terms of α.
• If (⟨w, x_i⟩ + b) − y_i > ϵ we have ξ_i⁻ > 0 and therefore α_i⁻ = C/m. On the other hand, ξ_i⁺ = 0 and α_i⁺ = 0. The case y_i − (⟨w, x_i⟩ + b) > ϵ is symmetric and yields ξ_i⁻ = 0, ξ_i⁺ > 0, α_i⁺ = C/m, and α_i⁻ = 0.
• Finally, if (⟨w, x_i⟩ + b) − y_i = ϵ we have ξ_i⁻ = 0 and 0 ≤ α_i⁻ ≤ C/m, while ξ_i⁺ = 0 and α_i⁺ = 0. Similarly, when y_i − (⟨w, x_i⟩ + b) = ϵ we obtain ξ_i⁺ = 0, 0 ≤ α_i⁺ ≤ C/m, ξ_i⁻ = 0 and α_i⁻ = 0.

Note that α_i⁺ and α_i⁻ are never simultaneously non-zero.
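Evaluating the regression function (7.58) from given coefficients is straightforward (a sketch; the kernel and the solved α⁺, α⁻, b are assumed as inputs):

import numpy as np

def svr_predict(x, X_train, alpha_plus, alpha_minus, b, kernel):
    # f(x) = Σ_i (α_i⁺ − α_i⁻) k(x_i, x) + b, cf. (7.58)
    coef = np.asarray(alpha_plus) - np.asarray(alpha_minus)
    return sum(c * kernel(xi, x) for c, xi in zip(coef, X_train)) + b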

7.3.1 Incorporating General Loss Functions


Using the same reasoning as in Section 7.1.1 we can deduce from (7.52) that
the loss function of support vector regression is given by

l(w, x, y) = max(0, |y − ⟨ w, x⟩ | − ϵ). (7.59)

It turns out that the support vector regression framework can be easily extended to handle other, more general, convex loss functions such as the ones found in Table 7.1. Different losses have different properties and hence lead to different estimators. For instance, the square loss leads to penalized least squares (LS) regression, while the Laplace loss leads to the penalized least absolute deviations (LAD) estimator. Huber's loss, on the other hand, is a combination of the penalized LS and LAD estimators, and the pinball loss with parameter τ ∈ [0, 1] is used to estimate τ-quantiles. Setting τ = 0.5 in the pinball loss leads to a scaled version of the Laplace loss. If we define ξ = y − ⟨w, x⟩, then it is easily verified that all these losses can be written as

l(w, x, y) = { l⁺(ξ − ϵ) if ξ > ϵ;  l⁻(−ξ − ϵ) if ξ < −ϵ;  0 if ξ ∈ [−ϵ, ϵ] }.    (7.60)

For all these different loss functions, the support vector regression formulation can be written in a unified fashion as follows:

min_{w,b,ξ⁺,ξ⁻}  (1/2)‖w‖² + (C/m) Σ_{i=1}^m ( l⁺(ξ_i⁺) + l⁻(ξ_i⁻) )    (7.61a)
s.t.  y_i − (⟨w, x_i⟩ + b) ≤ ϵ + ξ_i⁺ for all i    (7.61b)
      (⟨w, x_i⟩ + b) − y_i ≤ ϵ + ξ_i⁻ for all i    (7.61c)
      ξ_i⁺ ≥ 0, and ξ_i⁻ ≥ 0.    (7.61d)

The dual in this case is given by

min_{α⁺,α⁻}  (1/2) Σ_{i,j} (α_i⁺ − α_i⁻)(α_j⁺ − α_j⁻) ⟨x_i, x_j⟩
             − (C/m) Σ_{i=1}^m [ T⁺(ξ_i⁺) + T⁻(ξ_i⁻) ] + ϵ Σ_{i=1}^m (α_i⁺ + α_i⁻) − Σ_{i=1}^m y_i (α_i⁺ − α_i⁻)    (7.62a)
s.t.  Σ_{i=1}^m α_i⁺ = Σ_{i=1}^m α_i⁻    (7.62b)
      0 ≤ α_i^{+,−} ≤ (C/m) ∂_ξ l^{+,−}(ξ_i^{+,−})    (7.62c)
      0 ≤ ξ_i^{+,−}    (7.62d)
      ξ_i^{+,−} = inf{ ξ | (C/m) ∂_ξ l^{+,−}(ξ) ≥ α_i^{+,−} }.    (7.62e)

Here T⁺(ξ) = l⁺(ξ) − ξ∂_ξ l⁺(ξ) and T⁻(ξ) = l⁻(ξ) − ξ∂_ξ l⁻(ξ).

Table 7.1. Various loss functions which can be used in support vector regression. For brevity we denote y − ⟨w, x⟩ as ξ and write the loss l(w, x, y) in terms of ξ.

ϵ-insensitive loss     max(0, |ξ| − ϵ)
Laplace loss           |ξ|
Square loss            (1/2)ξ²
Huber's robust loss    ξ²/(2σ) if |ξ| ≤ σ, and |ξ| − σ/2 otherwise
Pinball loss           τξ if ξ ≥ 0, and (τ − 1)ξ otherwise

We now show how (7.62) can be specialized to the pinball loss. Clearly, l⁺(ξ) = τξ while l⁻(ξ) = (1 − τ)ξ. Therefore, T⁺(ξ) = τξ − ξτ = 0, and similarly T⁻(ξ) = 0. Since ∂_ξ l⁺(ξ) = τ and ∂_ξ l⁻(ξ) = (1 − τ) for all ξ ≥ 0, it follows that the bounds on α^{+,−} can be computed as 0 ≤ α_i⁺ ≤ (C/m)τ and 0 ≤ α_i⁻ ≤ (C/m)(1 − τ). If we denote α = α⁺ − α⁻ and observe that ϵ = 0 for the pinball loss, then (7.62) specializes as follows:

min_α  (1/2) Σ_{i,j} α_i α_j ⟨x_i, x_j⟩ − Σ_{i=1}^m y_i α_i    (7.63a)
s.t.  Σ_{i=1}^m α_i = 0    (7.63b)
      (C/m)(τ − 1) ≤ α_i ≤ (C/m)τ.    (7.63c)

Similar specializations of (7.62) for other loss functions in Table 7.1 can be derived.
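For reference, the losses of Table 7.1, written as functions of ξ = y − ⟨w, x⟩ (an illustrative sketch):

def eps_insensitive(xi, eps):
    return max(0.0, abs(xi) - eps)

def laplace(xi):
    return abs(xi)

def square(xi):
    return 0.5 * xi**2

def huber(xi, sigma):
    # quadratic near zero, linear in the tails
    return xi**2 / (2 * sigma) if abs(xi) <= sigma else abs(xi) - sigma / 2

def pinball(xi, tau):
    return tau * xi if xi >= 0 else (tau - 1) * xi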

7.3.2 Incorporating the ν Trick


One can also incorporate the ν trick into support vector regression. The primal problem obtained after incorporating the ν trick can be written as

min_{w,b,ξ⁺,ξ⁻,ϵ}  (1/2)‖w‖² + ϵ + (1/(νm)) Σ_{i=1}^m (ξ_i⁺ + ξ_i⁻)    (7.64a)
s.t.  (⟨w, x_i⟩ + b) − y_i ≤ ϵ + ξ_i⁺ for all i    (7.64b)
      y_i − (⟨w, x_i⟩ + b) ≤ ϵ + ξ_i⁻ for all i    (7.64c)
      ξ_i⁺ ≥ 0, ξ_i⁻ ≥ 0, and ϵ ≥ 0.    (7.64d)

Proceeding as before we obtain the following simplified dual:

min_{α⁺,α⁻}  (1/2) Σ_{i,j} (α_i⁻ − α_i⁺)(α_j⁻ − α_j⁺) ⟨x_i, x_j⟩ − Σ_{i=1}^m y_i (α_i⁻ − α_i⁺)    (7.65a)
s.t.  Σ_{i=1}^m (α_i⁻ − α_i⁺) = 0    (7.65b)
      Σ_{i=1}^m (α_i⁺ + α_i⁻) = 1    (7.65c)
      0 ≤ α_i⁺ ≤ 1/(νm)    (7.65d)
      0 ≤ α_i⁻ ≤ 1/(νm).    (7.65e)

7.4 Novelty Detection


The large margin approach can also be adapted to perform novelty detection
or quantile estimation. Novelty detection is an unsupervised task where one

is interested in flagging a small fraction of the input X = {x_1, . . . , x_m} as atypical or novel. It can be viewed as a special case of the quantile estimation task, where we are interested in estimating a simple set C such that Pr(x ∈ C) ≥ µ for some µ ∈ [0, 1]. One way to measure simplicity is to use the volume of the set. Formally, if |C| denotes the volume of a set, then the quantile estimation task is to estimate

arginf { |C| s.t. Pr(x ∈ C) ≥ µ }.    (7.66)

Given the input data X one can compute the empirical density

p̂(x) = { 1/m if x ∈ X;  0 otherwise },

and estimate its (not necessarily unique) µ-quantiles. Unfortunately, such estimates are very brittle and do not generalize well to unseen data. One possible way to address this issue is to restrict C to simple sets such as spheres or half-spaces. In other words, we estimate simple sets which contain a µ fraction of the dataset. For our purposes, we specifically work with half-spaces defined by hyperplanes. While half-spaces may seem rather restrictive, remember that the kernel trick can be used to map data into a high-dimensional space; half-spaces in the mapped space correspond to non-linear decision boundaries in the input space. Furthermore, instead of explicitly identifying C we will learn an indicator function for C, that is, a function f which takes on the value +1 inside C and −1 elsewhere.

With (1/2)‖w‖² as a regularizer, the problem of estimating a hyperplane such that a large fraction of the points in the input data X lie on one of its sides can be written as:

min_{w,ξ,ρ}  (1/2)‖w‖² + (1/(νm)) Σ_{i=1}^m ξ_i − ρ    (7.67a)
s.t.  ⟨w, x_i⟩ ≥ ρ − ξ_i for all i    (7.67b)
      ξ_i ≥ 0.    (7.67c)

Clearly, we want ρ to be as large as possible so that the volume of the half-space ⟨w, x⟩ ≥ ρ is minimized. Furthermore, ν ∈ [0, 1] is a parameter which is analogous to the ν we introduced for the ν-SVM earlier. Roughly speaking, it denotes the fraction of input data for which ⟨w, x_i⟩ ≤ ρ. An alternative interpretation of (7.67) is to assume that we are separating the data set X from the origin (see Figure 7.7 for an illustration). Therefore, this method is also widely known as the one-class SVM.

Fig. 7.7. The novelty detection problem can be viewed as finding a large margin
hyperplane which separates ν fraction of the data points away from the origin.

The Lagrangian of (7.67) can be written by introducing non-negative Lagrange multipliers α_i and β_i:

L(w, ξ, ρ, α, β) = (1/2)‖w‖² + (1/(νm)) Σ_{i=1}^m ξ_i − ρ + Σ_{i=1}^m α_i (ρ − ξ_i − ⟨w, x_i⟩) − Σ_{i=1}^m β_i ξ_i.

By taking gradients with respect to the primal variables and setting them to 0 we obtain

w = Σ_{i=1}^m α_i x_i    (7.68)
α_i = 1/(νm) − β_i ≤ 1/(νm)    (7.69)
Σ_{i=1}^m α_i = 1.    (7.70)

Noting that α_i, β_i ≥ 0 and substituting the above conditions into the Lagrangian yields the dual

min_α  (1/2) Σ_{i,j} α_i α_j ⟨x_i, x_j⟩    (7.71a)
s.t.  0 ≤ α_i ≤ 1/(νm)    (7.71b)
      Σ_{i=1}^m α_i = 1.    (7.71c)

This can easily be solved by a straightforward modification of the SMO algorithm (see Section 7.1.3 and Problem 7.7). As in the previous sections, an analysis of the KKT conditions shows that 0 < α_i if and only if ⟨w, x_i⟩ ≤ ρ; such points are called support vectors. As before, we can replace ⟨x_i, x_j⟩ by a kernel k(x_i, x_j) to transform half-spaces in the feature space into non-linear shapes in the input space. The following theorem explains the significance of the parameter ν.

Theorem 7.5 Assume that the solution of (7.71) satisfies ρ ≠ 0, then the following statements hold:

(i) ν is an upper bound on the fraction of support vectors, that is, points for which ⟨w, x_i⟩ ≤ ρ.
(ii) Suppose the data X were generated independently from a distribution p(x) which does not contain discrete components. Moreover, assume that the kernel k is analytic and non-constant. With probability 1, asymptotically, ν equals the fraction of support vectors.
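In practice, problem (7.67)/(7.71) is what libraries expose as a one-class SVM. The following sketch uses scikit-learn's OneClassSVM (assuming that library is available); its parameter nu plays the role of ν above:

import numpy as np
from sklearn.svm import OneClassSVM

X = np.random.randn(200, 2)                      # toy data (illustrative)
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(X)   # nu ≈ fraction flagged as novel
labels = clf.predict(X)                          # +1 inside the region, -1 for novelties
print((labels == -1).mean())                     # roughly controlled by nu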

7.5 Margins and Probability


discuss the connection between probabilistic models and linear classifiers.
issues of consistency, optimization, efficiency, etc.

7.6 Beyond Binary Classification


In contrast to binary classification where there are only two possible ways
to label a training sample, in some of the extensions we discuss below each
training sample may be associated with one or more of k possible labels.
Therefore, we will use the decision function
y∗ = argmax f (x, y) where f (x, y) = ⟨ φ(x, y), w⟩ . (7.72)
y∈ {1,. ,k}

Recall that the joint feature map φ(x, y) was introduced in section 7.1.2. One
way to interpret the above equation is to view f (x, y) as a compatibility score
between instance x and label y; we assign the label with the highest
compatibility score to x. There are a number of extensions of the binary
hinge loss (7.13) which can be used to estimate this score function. In all
these cases the objective function is written as
min_w  J(w) := (λ/2)‖w‖² + (1/m) Σ_{i=1}^m l(w, x_i, y_i).    (7.73)

Here λ is a scalar which trades off the regularizer (1/2)‖w‖² with the empirical risk (1/m) Σ_{i=1}^m l(w, x_i, y_i). Plugging in different loss functions yields classifiers for different settings. Two strategies exist for finding the optimal w. Just like in the binary SVM case, one can compute and maximize the dual of (7.73). However, the number of dual variables becomes m|Y|, where m is the number of training points and |Y| denotes the size of the label set. The second strategy is to optimize (7.73) directly. However, the loss functions we discuss below are non-smooth, therefore non-smooth optimization algorithms such as bundle methods (Section 3.2.7) need to be used.

7.6.1 Multiclass Classification


In multiclass classification a training example is labeled with one of k possible labels, that is, Y = {1, . . . , k}. We discuss two different extensions of the binary hinge loss to the multiclass setting. It can easily be verified that setting Y = {±1} and φ(x, y) = (y/2)φ(x) recovers the binary hinge loss in both cases.

7.6.1.1 Additive Multiclass Hinge Loss


A natural generalization of the binary hinge loss is to penalize all labels which have been misclassified. The loss can now be written as

l(w, x, y) = Σ_{y′≠y} max(0, 1 − ⟨φ(x, y) − φ(x, y′), w⟩).    (7.74)

7.6.1.2 Maximum Multiclass Hinge Loss


Another variant of (7.13) penalizes only the maximally violating label:

l(w, x, y) := max(0, max_{y′≠y} (1 − ⟨φ(x, y) − φ(x, y′), w⟩)).    (7.75)

Note that both (7.74) and (7.75) are zero whenever

f(x, y) = ⟨φ(x, y), w⟩ ≥ 1 + max_{y′≠y} ⟨φ(x, y′), w⟩ = 1 + max_{y′≠y} f(x, y′).    (7.76)

In other words, they both ensure an adequate margin of separation, in this case 1, between the score of the true label f(x, y) and every other label f(x, y′). However, they differ in the way they penalize violators, that is, labels y′ ≠ y for which f(x, y) ≤ 1 + f(x, y′). In one case we linearly penalize the violators and sum up their contributions, while in the other case we linearly penalize only the maximum violator. In fact, (7.75) can be interpreted as the log-odds ratio in the exponential family. Towards this end choose η such that log η = 1 and rewrite (7.20):

log [ p(y|x, w) / max_{y′≠y} p(y′|x, w) ] = ⟨φ(x, y), w⟩ − max_{y′≠y} ⟨φ(x, y′), w⟩ ≥ 1.

Rearranging yields (7.76).
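Both multiclass losses are easy to state in code given the per-label scores f(x, y) (an illustrative sketch; `scores[y]` stands for f(x, y)):

import numpy as np

def additive_multiclass_hinge(scores, y):
    # (7.74): sum of hinges over all labels y' ≠ y
    scores = np.asarray(scores, dtype=float)
    margins = 1 - (scores[y] - np.delete(scores, y))
    return np.maximum(0, margins).sum()

def max_multiclass_hinge(scores, y):
    # (7.75): hinge on the maximally violating label only
    scores = np.asarray(scores, dtype=float)
    margins = 1 - (scores[y] - np.delete(scores, y))
    return max(0.0, margins.max())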

7.6.2 Multilabel Classification


In multilabel classification one or more of k possible labels are assigned to
a training example. Just like in the multiclass case two different losses can
be defined.

7.6.2.1 Additive Multilabel Hinge Loss


If we let Y_x ⊆ Y denote the labels assigned to x, and generalize the hinge loss to penalize all labels y′ ∉ Y_x which have been assigned a higher score than some y ∈ Y_x, then the loss can be written as

l(w, x, y) = Σ_{y∈Y_x, y′∉Y_x} max(0, 1 − ⟨φ(x, y) − φ(x, y′), w⟩).    (7.77)

7.6.2.2 Maximum Multilabel Hinge Loss


Another variant only penalizes the maximally violating pair. In this case the loss can be written as

l(w, x, y) = max(0, max_{y∈Y_x, y′∉Y_x} (1 − ⟨φ(x, y) − φ(x, y′), w⟩)).    (7.78)

One can immediately verify that specializing the above losses to the multiclass case recovers (7.74) and (7.75) respectively, while the binary case recovers (7.13). The above losses are zero only when

min_{y∈Y_x} f(x, y) = min_{y∈Y_x} ⟨φ(x, y), w⟩ ≥ 1 + max_{y′∉Y_x} ⟨φ(x, y′), w⟩ = 1 + max_{y′∉Y_x} f(x, y′).

This can be interpreted as follows: the losses ensure that all the labels assigned to x have larger scores compared to labels not assigned to x, with a margin of separation of at least 1.

Although the above loss functions are compatible with multiple labels, the prediction function argmax_y f(x, y) only takes into account the label with the highest score. This is a significant drawback of such models, which can be overcome by using a multiclass approach instead. Let |Y| be the size of the label set and z ∈ R^{|Y|} denote a vector with ±1 entries. We set z_y = +1 if y ∈ Y_x and z_y = −1 otherwise, and use the multiclass loss (7.75) on z. To predict we compute z* = argmax_z f(x, z) and assign to x the labels corresponding to the components of z* which are +1. Since z can take on 2^{|Y|} possible values, this approach is not feasible if |Y| is large. To tackle such problems, and to further reduce the computational complexity, we assume that the label correlations are captured via a |Y| × |Y| positive semi-definite matrix P, and that φ(x, y) can be written as φ(x) ⊗ Py. Here ⊗ denotes the Kronecker product. Furthermore, we express the vector w as an n × |Y| matrix W, where n denotes the dimension of φ(x). With these assumptions ⟨φ(x) ⊗ P(z − z′), w⟩ can be rewritten as

⟨φ(x)ᵀ W P, (z − z′)⟩ = Σ_i [φ(x)ᵀ W P]_i (z_i − z′_i),

and (7.78) specializes to

l(w, x, z) := max( 0, 1 − min_{z′_i ≠ z_i} Σ_i [φ(x)ᵀ W P]_i (z_i − z′_i) ).    (7.79)

An analogous specialization of (7.77) can also be derived wherein the minimum is replaced by a summation. Since the minimum (or summation, as the case may be) is over |Y| possible labels, computing the loss is tractable even if the set of labels Y is large.

7.6.3 Ordinal Regression and Ranking


We can generalize our above discussion to consider slightly more general
ranking problems. Denote by Y the set of all directed acyclic graphs on N
nodes. The presence of an edge (i, j) in y ∈ Y indicates that i is preferred
to j. The goal is to find a function f (x, i) which imposes a total order on
{1, . . . , N} which is in close agreement with y. Specifically, if the estimation error is given by the number of subgraphs of y which are in disagreement with the total order imposed by f, then the additive version of the loss can be written as

l(w, x, y) = Σ_{G∈A(y)} Σ_{(i,j)∈G} max(0, 1 − (f(x, i) − f(x, j))),    (7.80)

where A(y) denotes the set of all possible subgraphs of y. The maximum margin version, on the other hand, is given by

l(w, x, y) = max_{G∈A(y)} max_{(i,j)∈G} max(0, 1 − (f(x, i) − f(x, j))).    (7.81)

In other words, we test for each subgraph G of y if the ranking imposed by G


is satisfied by f . Selecting specific types of directed acyclic graphs recovers
the multiclass and multilabel settings (problem 7.9).

7.7 Large Margin Classifiers with Structure


7.7.1 Margin
define margin pictures

7.7.2 Penalized Margin


different types of loss, rescaling

7.7.3 Nonconvex Losses


the max - max loss

7.8 Applications
7.8.1 Sequence Annotation
7.8.2 Matching
7.8.3 Ranking
7.8.4 Shortest Path Planning
7.8.5 Image Annotation
7.8.6 Contingency Table Loss
7.9 Optimization
7.9.1 Column Generation
subdifferentials

7.9.2 Bundle Methods


7.9.3 Overrelaxation in the Dual
when we cannot do things exactly

7.10 CRFs vs Structured Large Margin Models


7.10.1 Loss Function
7.10.2 Dual Connections
7.10.3 Optimization
Problems
Problem 7.1 (Deriving the Margin {1}) Show that the distance of a point x_i to a hyperplane H = {x | ⟨w, x⟩ + b = 0} is given by |⟨w, x_i⟩ + b| / ‖w‖.

Problem 7.2 (SVM without Bias {1}) A homogeneous hyperplane is one


which passes through the origin, that is,

H = {x| ⟨ w, x⟩ = 0}. (7.82)

If we devise a soft margin classifier which uses the homogeneous hyperplane


as a decision boundary, then the corresponding primal optimization problem
can be written as follows:
min_{w,ξ}  (1/2)‖w‖² + C Σ_{i=1}^m ξ_i    (7.83a)
s.t.  y_i ⟨w, x_i⟩ ≥ 1 − ξ_i for all i    (7.83b)
      ξ_i ≥ 0.    (7.83c)

Derive the dual of (7.83) and contrast it with (7.9). What changes to the SMO
algorithm would you make to solve this dual?

Problem 7.3 (Deriving the simplified ν-SVM dual {2}) In Lemma 7.2 we used (7.41) to show that the constraint Σ_i α_i ≥ 1 can be replaced by Σ_i α_i = 1. Show that an equivalent way to arrive at the same conclusion is by arguing that the constraint ρ ≥ 0 is redundant in the primal (7.40). Hint: Observe that whenever ρ < 0 the objective function is always non-negative. On the other hand, setting w = ξ = b = ρ = 0 yields an objective function value of 0.

Problem 7.4 (Fenchel and Lagrange Duals {2}) We derived the La- grange
dual of (7.12) in Section 7.1 and showed that it is (7.9). Derive the Fenchel
dual of (7.12) and relate it to (7.9). Hint: See theorem 3.3.5 of [BL00].

Problem 7.5 (Dual of the square hinge loss {1}) The analog of (7.5) when working with the square hinge loss is the following:

min_{w,b,ξ}  (1/2)‖w‖² + (C/m) Σ_{i=1}^m ξ_i²    (7.84a)
s.t.  y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i for all i    (7.84b)
      ξ_i ≥ 0.    (7.84c)

Derive the Lagrange dual of the above optimization problem and show that it is a Quadratic Programming problem.

Problem 7.6 (Dual of the ramp loss {1}) Derive the Lagrange dual of (7.49) and show that it is the Quadratic Programming problem (7.50).

Problem 7.7 (SMO for various SVM formulations {2}) Derive an SMO
like decomposition algorithm for solving the dual of the following problems:
• ν-SVM (7.41).
• SV regression (7.57).
• SV novelty detection (7.71).

Problem 7.8 (Novelty detection with Balls {2}) In Section 7.4 we as-
sumed that we wanted to estimate a halfspace which contains a major frac-
tion of the input data. An alternative approach is to use balls, that is, we
estimate a ball of small radius in feature space which encloses a majority of
the input data. Write the corresponding optimization problem and its dual.
Show that if the kernel is translation invariant, that is, k(x, x′) depends only on ‖x − x′‖, then the optimization problem with balls is equivalent to (7.71). Explain why this happens geometrically.

Problem 7.9 (Multiclass and Multilabel loss from Ranking Loss {1}) Show how the multiclass losses (7.74) and (7.75), and the multilabel losses (7.77) and (7.78), can be derived as special cases of (7.80) and (7.81) respectively.

Problem 7.10 Invariances (basic loss)

Problem 7.11 Polynomial transformations - SDP constraints


Appendix 1

Linear Algebra and Functional Analysis

A1.1 Johnson Lindenstrauss Lemma


Lemma 1.1 (Johnson Lindenstrauss) Let X be a set of n points in R^d, represented as an n × d matrix A. Given ϵ, β > 0, let

k ≥ (4 + 2β) / (ϵ²/2 − ϵ³/3) · log n    (1.1)

be a positive integer. Construct a d × k random matrix R with independent standard normal random variables, that is, R_ij ∼ N(0, 1), and let

E = (1/√k) A R.    (1.2)

Define f : R^d → R^k as the function which maps the rows of A to the rows of E. With probability at least 1 − n^{−β}, for all u, v ∈ X we have

(1 − ϵ) ‖u − v‖² ≤ ‖f(u) − f(v)‖² ≤ (1 + ϵ) ‖u − v‖².    (1.3)

Our proof presentation by and large follows [?]. We first show that

Lemma 1.2 For any arbitrary vector α ∈ R^d let q_i denote the i-th component of f(α). Then q_i ∼ N(0, ‖α‖²/k) and hence

E[ ‖f(α)‖² ] = Σ_{i=1}^k E[ q_i² ] = ‖α‖².    (1.4)

In other words, the expected squared length of a vector is preserved even after embedding it into a k dimensional space. Next we show that the lengths of the embedded vectors are tightly concentrated around their mean.

Lemma 1.3 For any ϵ > 0 and any unit vector α ∈ R^d we have

Pr[ ‖f(α)‖² > 1 + ϵ ] < exp( −(k/2)(ϵ²/2 − ϵ³/3) )    (1.5)
Pr[ ‖f(α)‖² < 1 − ϵ ] < exp( −(k/2)(ϵ²/2 − ϵ³/3) ).    (1.6)


Corollary 1.4 If we choose k as in (1.1) then for any α ∈ R^d we have

Pr[ (1 − ϵ)‖α‖² ≤ ‖f(α)‖² ≤ (1 + ϵ)‖α‖² ] ≥ 1 − 2/n^{2+β}.    (1.7)

Proof Follows immediately from Lemma 1.3 by setting

2 exp( −(k/2)(ϵ²/2 − ϵ³/3) ) ≤ 2/n^{2+β},

and solving for k.


There are 2n pairs of vectors u, v in X, and their corresponding distances
ǁu − vǁ are preserved within 1 ± ϵ factor as shown by the ab ov e lemma.
Therefore, the probability of not satisfying (1.3) is bounded by n2 · n2+β
2
<
1/nβ as claimed in the Johnson Lindenstrauss Lemma. All that remains is
to prove Lemma 1.2 and 1.3.
Proof (Lemma 1.2). Since q_i = (1/√k) Σ_j R_ij α_j is a linear combination of standard normal random variables R_ij, it follows that q_i is normally distributed. To compute the mean note that

E[q_i] = (1/√k) Σ_j α_j E[R_ij] = 0.

Since the R_ij are independent zero mean unit variance random variables, E[R_ij R_il] = 1 if j = l and 0 otherwise. Using this,

E[q_i²] = E[ ( (1/√k) Σ_{j=1}^d R_ij α_j )² ] = (1/k) Σ_{j=1}^d Σ_{l=1}^d α_j α_l E[R_ij R_il] = (1/k) Σ_{j=1}^d α_j² = (1/k) ‖α‖².

Proof (Lemma 1.3). Clearly, for all λ > 0,

Pr[ ‖f(α)‖² > 1 + ϵ ] = Pr[ exp(λ‖f(α)‖²) > exp(λ(1 + ϵ)) ].

Using Markov's inequality (Pr[X ≥ a] ≤ E[X]/a) we obtain

Pr[ exp(λ‖f(α)‖²) > exp(λ(1 + ϵ)) ] ≤ E[ exp(λ‖f(α)‖²) ] / exp(λ(1 + ϵ))
  = E[ exp(λ Σ_{i=1}^k q_i²) ] / exp(λ(1 + ϵ))
  = E[ Π_{i=1}^k exp(λ q_i²) ] / exp(λ(1 + ϵ))
  = ( E[ exp(λ q_i²) ] )^k / exp(λ(1 + ϵ)).    (1.8)
The last equality is because the q_i's are i.i.d. Since α is a unit vector, from the previous lemma q_i ∼ N(0, 1/k). Therefore, k q_i² is a χ² random variable with moment generating function

E[ exp(λ q_i²) ] = E[ exp((λ/k) · k q_i²) ] = 1 / √(1 − 2λ/k).

Plugging this into (1.8),

Pr[ exp(λ‖f(α)‖²) > exp(λ(1 + ϵ)) ] ≤ exp(−λ(1 + ϵ)) / (1 − 2λ/k)^{k/2}.

Setting λ = kϵ/(2(1 + ϵ)) in the above inequality and simplifying,

Pr[ exp(λ‖f(α)‖²) > exp(λ(1 + ϵ)) ] ≤ ( exp(−ϵ)(1 + ϵ) )^{k/2}.

Using the inequality

log(1 + ϵ) < ϵ − ϵ²/2 + ϵ³/3

we can write

Pr[ exp(λ‖f(α)‖²) > exp(λ(1 + ϵ)) ] ≤ exp( −(k/2)(ϵ²/2 − ϵ³/3) ).

This proves (1.5). To prove (1.6) we need to repeat the above steps and use the inequality

log(1 − ϵ) < −ϵ − ϵ²/2.
This is left as an exercise to the reader.
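A small numerical illustration of the lemma (a sketch; the constants are chosen for demonstration only):

import numpy as np

rng = np.random.default_rng(0)
n, d, eps, beta = 100, 10000, 0.25, 1.0
k = int(np.ceil((4 + 2 * beta) / (eps**2 / 2 - eps**3 / 3) * np.log(n)))  # (1.1)
A = rng.standard_normal((n, d))               # n points in R^d
R = rng.standard_normal((d, k))               # entries R_ij ~ N(0, 1)
E = A @ R / np.sqrt(k)                        # the embedding (1.2)
u, v = A[0], A[1]
ratio = np.sum((E[0] - E[1])**2) / np.sum((u - v)**2)
print(1 - eps <= ratio <= 1 + eps)            # holds with high probability, cf. (1.3)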

A1.2 Spectral Properties of Matrices


A1.2.1 Basics
A1.2.2 Special Matrices
unitary, hermitean, positive semidefinite

A1.2.3 Normal Forms


Jacobi

A1.3 Functional Analysis


A1.3.1 Norms and Metrics
vector space, norm, triangle inequality

A1.3.2 Banach Spaces


normed vector space, evaluation functionals, examples, dual space

A1.3.3 Hilbert Spaces


symmetric inner product

A1.3.4 Operators
spectrum, norm, bounded, unbounded operators

A1.4 Fourier Analysis


A1.4.1 Basics
A1.4.2 Operators
Appendix 2

Conjugate Distributions


Binomial — Beta

φ(x) = x
e^{h(nν,n)} = Γ(nν + 1) Γ(n(1 − ν) + 1) / Γ(n + 2) = B(nν + 1, n(1 − ν) + 1)

In traditional notation one represents the conjugate as

p(z; α, β) = [ Γ(α + β) / (Γ(α)Γ(β)) ] z^{α−1} (1 − z)^{β−1},

where α = nν + 1 and β = n(1 − ν) + 1.

Multinomial — Dirichlet

φ(x) = e_x
e^{h(nν,n)} = Π_{i=1}^d Γ(nν_i + 1) / Γ(n + d)

In traditional notation one represents the conjugate as

p(z; α) = [ Γ(Σ_{i=1}^d α_i) / Π_{i=1}^d Γ(α_i) ] Π_{i=1}^d z_i^{α_i − 1},

where α_i = nν_i + 1.

Poisson — Gamma

φ(x) = x
e^{h(nν,n)} = n^{−nν} Γ(nν)

In traditional notation one represents the conjugate as

p(z; α, β) = [ β^α / Γ(α) ] z^{α−1} e^{−βz},

where α = nν and β = n.
• Multinomial / Binomial
• Gaussian
• Laplace
• Poisson
• Dirichlet
• Wishart
• Student-t
• Beta
• Gamma
Appendix 3

Loss Functions

A3.1 Loss Functions


A multitude of loss functions are commonly used to derive seemingly different algorithms. This often blurs the similarities as well as subtle differences between them, often for historic reasons: each new loss is typically accompanied by at least one publication dedicated to it. In many cases, the loss is not even spelled out explicitly; instead it is only given by means of a constrained optimization problem. Cases in point are the papers introducing the (binary) hinge loss [BM92, CV95] and structured loss [TGK04, TJHA05]. Likewise, a geometric description obscures the underlying loss function, as in novelty detection [SPST+01].

In this section we give an expository yet unifying presentation of many of those loss functions. Many of them are well known, while others, such as multivariate ranking, hazard regression, or Poisson regression, are not commonly used in machine learning. Tables A3.1 and A3.2 contain a choice subset of simple scalar and vectorial losses. Our aim is to put the multitude of loss functions into a unified framework, and to show how these losses and their (sub)gradients can be computed efficiently for use in our solver framework.

Note that while all these losses are convex, not all of them are continuously differentiable. In this situation we give a subgradient. While this may not be optimal, the convergence rates of our algorithm do not depend on which element of the subdifferential we provide: in all cases the first order Taylor approximation is a lower bound which is tight at the point of expansion.

In this section, with a little abuse of notation, v_i is understood as the i-th component of a vector v when v is clearly not an element of a sequence or a set.

A3.1.1 Scalar Loss Functions


It is well known [Wah97] that the convex optimization problem

min_ξ ξ subject to y⟨w, x⟩ ≥ 1 − ξ and ξ ≥ 0   (3.1)
takes on the value max(0, 1 − y⟨w, x⟩) at its optimum. The latter is a convex function in w and x. Likewise, we may rewrite the ϵ-insensitive loss, Huber's robust loss, the quantile regression loss, and the novelty detection loss in terms of loss functions rather than constrained optimization problems. In all cases, ⟨w, x⟩ will play a key role insofar as the loss is convex in terms of the scalar quantity ⟨w, x⟩. A large number of loss functions fall into this category, as described in Table A3.1. Note that not all functions of this type are continuously differentiable. In this case we adopt the convention that

∂_x max(f(x), g(x)) = ∂_x f(x) if f(x) ≥ g(x), and ∂_x g(x) otherwise.   (3.2)

Since we are only interested in obtaining an arbitrary element of the subdifferential, this convention is consistent with our requirements.

Table A3.1. Scalar loss functions and their derivatives, depending on f := ⟨w, x⟩ and on y. Each entry lists the loss, then its derivative.

Logistic [CSS00]: log(1 + exp(−yf)); −y/(1 + exp(−yf))
Novelty [SPST+01]: max(0, ρ − f); 0 if f ≥ ρ, and −1 otherwise
Least mean squares [Wil98]: (1/2)(f − y)²; f − y
Least absolute deviation: |f − y|; sign(f − y)
Quantile regression [Koe05]: max(τ(f − y), (1 − τ)(y − f)); τ if f > y, and τ − 1 otherwise
ϵ-insensitive [VGS97]: max(0, |f − y| − ϵ); 0 if |f − y| ≤ ϵ, else sign(f − y)
Huber's robust loss [MSR+97]: (1/2)(f − y)² if |f − y| ≤ 1, else |f − y| − 1/2; f − y if |f − y| ≤ 1, else sign(f − y)
Poisson regression [Cre93]: exp(f) − yf; exp(f) − y

Table A3.2. Vectorial loss functions and their derivatives, depending on the vector f := Wx and on y. Each entry lists the loss, then its derivative.

Soft-Margin Multiclass [TGK04, CS03]: max_{y′}(f_{y′} − f_y + ∆(y, y′)); e_{y*} − e_y, where y* is the argmax of the loss
Scaled Soft-Margin Multiclass [TJHA05]: max_{y′} Γ(y, y′)(f_{y′} − f_y + ∆(y, y′)); Γ(y, y*)(e_{y*} − e_y), where y* is the argmax of the loss
Softmax Multiclass [CDLS99]: log Σ_{y′} exp(f_{y′}) − f_y; [Σ_{y′} e_{y′} exp(f_{y′})]/[Σ_{y′} exp(f_{y′})] − e_y
Multivariate Regression: (1/2)(f − y)^T M(f − y) where M ⪰ 0; M(f − y)
Let us discuss the issue of efficient computation. For all scalar losses we may write l(x, y, w) = l̄(⟨w, x⟩, y), as described in Table A3.1. In this case a simple application of the chain rule yields ∂_w l(x, y, w) = l̄′(⟨w, x⟩, y) · x. For instance, for squared loss we have

l̄(⟨w, x⟩, y) = (1/2)(⟨w, x⟩ − y)² and l̄′(⟨w, x⟩, y) = ⟨w, x⟩ − y.

Consequently, the derivative of the empirical risk term is given by

∂_w R_emp(w) = (1/m) Σ_{i=1}^m l̄′(⟨w, x_i⟩, y_i) · x_i.   (3.3)

This means that if we want to compute l and ∂_w l on a large number of observations x_i, represented as a matrix X, we can make use of fast linear algebra routines to pre-compute the vectors

f = Xw and g^T X where g_i = l̄′(f_i, y_i).   (3.4)

This is possible for any of the loss functions listed in Table A3.1, and for many other similar losses. The advantage of this unified representation is that implementing each individual loss takes very little time, since the computational infrastructure for computing Xw and g^T X is shared. Evaluating l̄(f_i, y_i) and l̄′(f_i, y_i) for all i can be done in O(m) time and is not time-critical in comparison to the remaining operations. Algorithm 3.1 describes the details.

Algorithm 3.1 ScalarLoss(w, X, y)

1: input: Weight vector w, feature matrix X, and labels y
2: Compute f = Xw
3: Compute r = Σ_i l̄(f_i, y_i) and g = l̄′(f, y)
4: g ← g^T X
5: return Risk r and gradient g

An important but often neglected issue is worth mentioning. Computing f requires us to right multiply the matrix X with the vector w, while computing g requires the left multiplication of X with the vector g^T. If X is stored in row major format then Xw can be computed rather efficiently, while g^T X is expensive. This is particularly true if X cannot fit into main memory. The converse is the case when X is stored in column major format. Similar problems are encountered when X is a sparse matrix stored in either compressed row or compressed column format.
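As an illustration, here is a minimal NumPy sketch of Algorithm 3.1 (my own rendering; squared loss is hard-coded as the scalar loss l̄, and any row of Table A3.1 could be substituted by changing the two marked lines).

import numpy as np

def scalar_loss(w, X, y):
    """Risk and gradient of the empirical risk for a scalar loss (Algorithm 3.1).

    Here lbar(f, y) = 0.5 * (f - y)^2 (least mean squares); substitute any loss
    from Table A3.1 by changing the two lines marked below.
    """
    f = X @ w                           # step 2: f = Xw, one matrix-vector product
    r = 0.5 * np.sum((f - y) ** 2)      # step 3: r = sum_i lbar(f_i, y_i)   <-- loss
    g = f - y                           # step 3: g_i = lbar'(f_i, y_i)      <-- derivative
    return r, g @ X                     # step 4: gradient g^T X

# Toy usage with random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)
w = np.zeros(5)
risk, grad = scalar_loss(w, X, y)
print(risk, grad.shape)                 # scalar risk and a gradient in R^5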

A3.1.2 Structured Loss

In recent years structured estimation has gained substantial popularity in machine learning [TJHA05, TGK04, BHS+07]. At its core it relies on two types of convex loss functions: the logistic loss,

l(x, y, w) = log Σ_{y′∈Y} exp(⟨w, φ(x, y′)⟩) − ⟨w, φ(x, y)⟩,   (3.5)

and the soft-margin loss,

l(x, y, w) = max_{y′∈Y} Γ(y, y′)⟨w, φ(x, y′) − φ(x, y)⟩ + ∆(y, y′).   (3.6)

Here φ(x, y) is a joint feature map, ∆(y, y′) ≥ 0 describes the cost of misclassifying y by y′, and Γ(y, y′) ≥ 0 is a scaling term which indicates by how much the large margin property should be enforced. For instance, [TGK04] choose Γ(y, y′) = 1. On the other hand, [TJHA05] suggest Γ(y, y′) = ∆(y, y′), which reportedly yields better performance. Finally, [McA07] recently suggested generic functions Γ(y, y′).

The logistic loss can also be interpreted as the negative log-likelihood of a conditional exponential family model:

p(y|x; w) := exp(⟨w, φ(x, y)⟩ − g(w|x)),   (3.7)

where the normalizing constant g(w|x), often called the log-partition function, reads

g(w|x) := log Σ_{y′∈Y} exp(⟨w, φ(x, y′)⟩).   (3.8)

As a consequence of the Hammersley-Clifford theorem [Jor08], every exponential family distribution corresponds to an undirected graphical model. In our case this implies that the labels y factorize according to an undirected graphical model. A large number of problems have been addressed within this setting, amongst them named entity tagging [LMP01], sequence alignment [TJHA05], segmentation [RSS+07] and path planning [RBZ06]. It is clearly impossible to give examples of all settings in this section, nor would a brief summary do this field any justice. We therefore refer the reader to the edited volume [BHS+07] and the references therein.

If the underlying graphical model is tractable, then efficient inference algorithms based on dynamic programming can be used to compute (3.5) and (3.6). We discuss intractable graphical models in Section A3.1.2.1, and now turn our attention to the derivatives of the above structured losses.

When it comes to computing derivatives of the logistic loss (3.5), we have

∂_w l(x, y, w) = [Σ_{y′} φ(x, y′) exp⟨w, φ(x, y′)⟩]/[Σ_{y′} exp⟨w, φ(x, y′)⟩] − φ(x, y)   (3.9)
             = E_{y′∼p(y′|x)}[φ(x, y′)] − φ(x, y),   (3.10)

where p(y|x) is the exponential family model (3.7). In the case of (3.6) we denote by ȳ(x) the argmax of the RHS, that is,

ȳ(x) := argmax_{y′} Γ(y, y′)⟨w, φ(x, y′) − φ(x, y)⟩ + ∆(y, y′).   (3.11)

This allows us to compute the derivative of l(x, y, w) as

∂_w l(x, y, w) = Γ(y, ȳ(x))[φ(x, ȳ(x)) − φ(x, y)].   (3.12)

In the case where the loss is maximized for more than one distinct value ȳ(x) we may average over the individual values, since any convex combination of such terms lies in the subdifferential.

Note that (3.6) majorizes ∆(y, y*), where y* := argmax_{y′}⟨w, φ(x, y′)⟩ [TJHA05]. This can be seen via the following series of inequalities:

∆(y, y*) ≤ Γ(y, y*)⟨w, φ(x, y*) − φ(x, y)⟩ + ∆(y, y*) ≤ l(x, y, w).

The first inequality follows because Γ(y, y*) ≥ 0 and y* maximizes ⟨w, φ(x, y′)⟩, thus implying that Γ(y, y*)⟨w, φ(x, y*) − φ(x, y)⟩ ≥ 0. The second inequality follows by definition of the loss.
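To make (3.6) and (3.12) concrete, here is a small sketch (my own example; the feature map and cost function are made up) for a finite label set, using the loss-rescaling choice Γ(y, y′) = ∆(y, y′) and explicit enumeration over Y.

import numpy as np

def softmargin_structured(w, x, y, labels, phi, delta):
    """Soft-margin structured loss (3.6) with Gamma = Delta, and a subgradient (3.12)."""
    scores = [delta(y, yp) * (w @ (phi(x, yp) - phi(x, y))) + delta(y, yp)
              for yp in labels]
    ybar = labels[int(np.argmax(scores))]         # maximizer as in (3.11)
    loss = max(scores)
    grad = delta(y, ybar) * (phi(x, ybar) - phi(x, y))
    return loss, grad

# Multiclass example: phi(x, y) = e_y (x) realized as a block feature map.
d, classes = 4, [0, 1, 2]
def phi(x, y):
    v = np.zeros(d * len(classes))
    v[y * d:(y + 1) * d] = x
    return v
delta = lambda y, yp: float(y != yp)              # 0/1 misclassification cost

rng = np.random.default_rng(1)
w, x = rng.standard_normal(d * len(classes)), rng.standard_normal(d)
print(softmargin_structured(w, x, 0, classes, phi, delta))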
We conclude this section with a simple lemma which is at the heart of
several derivations of [Joa05]. While the proof in the original paper is far from
trivial, it is straightforward in our setting:
Lemma 3.1 Denote by δ(y, y′) a loss and let φ(x_i, y_i) be a feature map for observations (x_i, y_i) with 1 ≤ i ≤ m. Moreover, denote by X, Y the set of all m patterns and labels respectively. Finally let

Φ(X, Y) := Σ_{i=1}^m φ(x_i, y_i) and ∆(Y, Y′) := Σ_{i=1}^m δ(y_i, y_i′).   (3.13)

Then the following two losses are equivalent:

Σ_{i=1}^m max_{y′} [⟨w, φ(x_i, y′) − φ(x_i, y_i)⟩ + δ(y_i, y′)]  and  max_{Y′} [⟨w, Φ(X, Y′) − Φ(X, Y)⟩ + ∆(Y, Y′)].

This is immediately obvious, since both feature map and loss decompose, which allows us to perform the maximization over Y′ by maximizing each of its m components. In doing so, we showed that aggregating all data and labels into a single feature map and loss yields results identical to minimizing the sum over all individual losses. This holds, in particular, for the sample error loss of [Joa05]. Also note that this equivalence does not hold whenever Γ(y, y′) is not constant.

A3.1.2.1 Intractable Models

We now discuss cases where computing l(x, y, w) itself is too expensive. For instance, for intractable graphical models the sum Σ_y exp⟨w, φ(x, y)⟩ cannot be computed efficiently. [WJ03] propose the use of a convex majorization of the log-partition function in those cases. In our setting this means that instead of dealing with

l(x, y, w) = g(w|x) − ⟨w, φ(x, y)⟩ where g(w|x) := log Σ_y exp⟨w, φ(x, y)⟩   (3.14)

one uses a more easily computable convex upper bound on g via

sup_{µ∈MARG(x)} ⟨w, µ⟩ + H_Gauss(µ|x).   (3.15)

Here MARG(x) is an outer bound on the conditional marginal polytope associated with the map φ(x, y). Moreover, H_Gauss(µ|x) is an upper bound on the entropy obtained by using a Gaussian with identical variance. More refined tree decompositions exist, too. The key benefit of our approach is that the solution µ of the optimization problem (3.15) can immediately be used as a gradient of the upper bound. This is computationally rather efficient.

Likewise note that [TGK04] use relaxations when solving structured estimation problems of the form

l(x, y, w) = max_{y′} Γ(y, y′)⟨w, φ(x, y′) − φ(x, y)⟩ + ∆(y, y′),   (3.16)

by enlarging the domain of maximization with respect to y′. For instance, instead of an integer programming problem we might relax the setting to a linear program which is much cheaper to solve. This, again, provides an upper bound on the original loss function.

In summary, we have demonstrated that convex relaxation strategies are well applicable for bundle methods. In fact, the results of the corresponding optimization procedures can be used directly for further optimization steps.

A3.1.3 Scalar Multivariate Performance Scores


We now discuss a series of structured loss functions and how they can be implemented efficiently. For the sake of completeness, we give a concise representation of previous work on multivariate performance scores and ranking methods. All these loss functions rely on having access to ⟨w, x⟩, which can be computed efficiently by using the same operations as in Section A3.1.1.

A3.1.3.1 ROC Score

Denote by f = Xw the vector of function values on the training set. It is well known that the area under the ROC curve is given by

AUC(x, y, w) = (1/(m+m−)) Σ_{y_i<y_j} I(⟨w, x_i⟩ < ⟨w, x_j⟩),   (3.17)

where m+ and m− are the numbers of positive and negative observations respectively, and I(·) is the indicator function. Directly optimizing the cost 1 − AUC(x, y, w) is difficult as it is not continuous in w. By using max(0, 1 + ⟨w, x_i − x_j⟩) as a surrogate loss function for all pairs (i, j) for which y_i < y_j, we obtain the convex multivariate empirical risk

R_emp(w) = (1/(m+m−)) Σ_{y_i<y_j} max(0, 1 + ⟨w, x_i − x_j⟩) = (1/(m+m−)) Σ_{y_i<y_j} max(0, 1 + f_i − f_j).   (3.18)

Obviously, we could compute R_emp(w) and its derivative by an O(m²) operation. However, [Joa05] showed that both can be computed in O(m log m) time using a sorting operation, which we now describe.

Algorithm 3.2 ROCScore(X, y, w)

1: input: Feature matrix X, labels y, and weight vector w
2: initialization: s− = m− and s+ = 0 and l = 0_m and c = Xw − (1/2)y
3: π ← {1, ..., m} sorted in ascending order of c
4: for i = 1 to m do
5:   if y_{π_i} = −1 then
6:     l_{π_i} ← s+ and s− ← s− − 1
7:   else
8:     l_{π_i} ← −s− and s+ ← s+ + 1
9:   end if
10: end for
11: Rescale l ← l/(m+m−) and compute r = ⟨l, c⟩ and g = l^T X
12: return Risk r and subgradient g

Denote by c = f − (1/2)y an auxiliary variable and let i and j be indices such that y_i = −1 and y_j = 1. It follows that c_i − c_j = 1 + f_i − f_j. The efficient algorithm rests on the observation that at most m distinct terms c_k, k = 1, ..., m, each with a different frequency l_k and sign, appear in (3.18). These frequencies l_k can be determined by first sorting c in ascending order, then scanning through the labels according to the sorted order of c while keeping running statistics such as the number s− of negative labels yet to be encountered and the number s+ of positive labels already encountered. When visiting y_k, we know that c_k appears s+ (or s−) times with positive (or negative) sign in (3.18) if y_k = −1 (or y_k = 1). Algorithm 3.2 spells out explicitly how to compute R_emp(w) and its subgradient.
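Here is a minimal NumPy transcription of Algorithm 3.2 (my own rendering), checked against the naive O(m²) evaluation of (3.18).

import numpy as np

def roc_score(X, y, w):
    """Risk (3.18) and subgradient via sorting (Algorithm 3.2). Labels y in {-1, +1}."""
    m_plus, m_minus = np.sum(y == 1), np.sum(y == -1)
    c = X @ w - 0.5 * y
    l = np.zeros(len(y))
    s_minus, s_plus = m_minus, 0
    for i in np.argsort(c):              # scan in ascending order of c
        if y[i] == -1:
            l[i] = s_plus
            s_minus -= 1
        else:
            l[i] = -s_minus
            s_plus += 1
    l /= m_plus * m_minus
    return l @ c, l @ X                  # risk r = <l, c>, subgradient g = l^T X

# Sanity check against the naive O(m^2) computation of (3.18).
rng = np.random.default_rng(0)
X, w = rng.standard_normal((50, 3)), rng.standard_normal(3)
y = rng.choice([-1, 1], size=50)
f = X @ w
naive = sum(max(0.0, 1 + f[i] - f[j])
            for i in range(50) for j in range(50)
            if y[i] < y[j]) / (np.sum(y == 1) * np.sum(y == -1))
r, g = roc_score(X, y, w)
print(np.isclose(r, naive))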

A3.1.3.2 Ordinal Regression

Essentially the same preference relationships need to hold for ordinal regression. The only difference is that y_i need not take on binary values any more. Instead, we may have an arbitrary number of different values y_i (e.g., 1 corresponding to 'strong reject' up to 10 corresponding to 'strong accept', when it comes to ranking papers for a conference). That is, we now have y_i ∈ {1, ..., n} rather than y_i ∈ {±1}. Our goal is to find some w such that ⟨w, x_i − x_j⟩ < 0 whenever y_i < y_j. Whenever this relationship is not satisfied, we incur a cost C(y_i, y_j) for preferring x_i to x_j. For example, C(y_i, y_j) could be constant, i.e., C(y_i, y_j) = 1 [Joa06], or linear, i.e., C(y_i, y_j) = y_j − y_i.

Denote by m_i the number of x_j for which y_j = i. In this case, there are M̄ = m² − Σ_{i=1}^n m_i² pairs (y_i, y_j) for which y_i ≠ y_j; this implies that there are M = M̄/2 pairs (y_i, y_j) such that y_i < y_j. Normalizing by the total number of comparisons we may write the overall cost of the estimator as

(1/M) Σ_{y_i<y_j} C(y_i, y_j) I(⟨w, x_i⟩ > ⟨w, x_j⟩) where M = (1/2)[m² − Σ_i m_i²].   (3.19)

Using the same convex majorization as above when we were maximizing the ROC score, we obtain an empirical risk of the form

R_emp(w) = (1/M) Σ_{y_i<y_j} C(y_i, y_j) max(0, 1 + ⟨w, x_i − x_j⟩).   (3.20)

Now the goal is to find an efficient algorithm for obtaining the number of times the individual losses are nonzero, so as to compute both the value and the gradient of R_emp(w). The complication arises from the fact that observations x_i with label y_i may appear on either side of the inequality depending on whether y_j < y_i or y_j > y_i. This problem can be solved as follows: sort f = Xw in ascending order and traverse it while keeping track of how many items with a lower value y_j are no more than 1 apart in terms of their value of f_i. In this way we may compute the count statistics efficiently. Algorithm 3.3 describes the details, generalizing the results of [Joa06]. Again, its runtime is O(m log m), thus allowing for efficient computation.
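For reference, here is a naive O(m²) NumPy evaluation of (3.20) (my own sketch; useful as a correctness check for an implementation of Algorithm 3.3, not as a production algorithm).

import numpy as np

def ordinal_risk_naive(X, y, w, C):
    """Empirical risk (3.20): O(m^2) reference implementation with subgradient."""
    f = X @ w
    n = int(y.max())
    m_i = np.bincount(y, minlength=n + 1)        # class counts m_1, ..., m_n
    M = (len(y) ** 2 - np.sum(m_i ** 2)) / 2     # number of pairs with y_i < y_j
    r, g = 0.0, np.zeros(X.shape[1])
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] < y[j] and 1 + f[i] - f[j] > 0:
                r += C[y[i], y[j]] * (1 + f[i] - f[j])
                g += C[y[i], y[j]] * (X[i] - X[j])
    return r / M, g / M

# Toy usage: linear cost C(y_i, y_j) = y_j - y_i on grades {1, ..., 5}.
rng = np.random.default_rng(2)
X, w = rng.standard_normal((40, 3)), rng.standard_normal(3)
y = rng.integers(1, 6, size=40)
C = np.fromfunction(lambda a, b: b - a, (6, 6))
print(ordinal_risk_naive(X, y, w, C))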

A3.1.3.3 Preference Relations

In general, our loss may be described by means of a set of preference relations j ≥ i for arbitrary pairs (i, j) ∈ {1, ..., m}², associated with a cost C(i, j) which is incurred whenever i is ranked above j. This set of preferences may or may not form a partial or a total order on the domain of all observations. In those cases efficient computations along the lines of Algorithm 3.3 exist. In general, this is not the case and we need to rely on the fact that the set P containing all preferences is sufficiently small that it can be enumerated efficiently. The risk is then given by

(1/|P|) Σ_{(i,j)∈P} C(i, j) I(⟨w, x_i⟩ > ⟨w, x_j⟩).   (3.21)

Algorithm 3.3 OrdinalRegression(X, y, w, C)

1: input: Feature matrix X, labels y, weight vector w, and score matrix C
2: initialization: l = 0_n and u_i = m_i for all i ∈ [n] and r = 0 and g = 0_m
3: Compute f = Xw and set c = [f − 1/2, f + 1/2] ∈ R^{2m} (concatenate the vectors)
4: Compute M = (m² − Σ_{i=1}^n m_i²)/2
5: Rescale C ← C/M
6: π ← {1, ..., 2m} sorted in ascending order of c
7: for i = 1 to 2m do
8:   j = π_i mod m
9:   if π_i ≤ m then
10:    for k = 1 to y_j − 1 do
11:      r ← r − C(k, y_j) u_k c_j
12:      g_j ← g_j − C(k, y_j) u_k
13:    end for
14:    l_{y_j} ← l_{y_j} + 1
15:  else
16:    for k = y_j + 1 to n do
17:      r ← r + C(y_j, k) l_k c_{j+m}
18:      g_j ← g_j + C(y_j, k) l_k
19:    end for
20:    u_{y_j} ← u_{y_j} − 1
21:  end if
22: end for
23: g ← g^T X
24: return: Risk r and subgradient g

Again, the same majorization argument as before allows us to write a convex upper bound

R_emp(w) = (1/|P|) Σ_{(i,j)∈P} C(i, j) max(0, 1 + ⟨w, x_i⟩ − ⟨w, x_j⟩)   (3.22)

whose subgradient is

∂_w R_emp(w) = (1/|P|) Σ_{(i,j)∈P} C(i, j) · [0 if ⟨w, x_j − x_i⟩ ≥ 1, and x_i − x_j otherwise].   (3.23)

The implementation is straightforward, as given in Algorithm 3.4.



Algorithm 3.4 Preference(X, w, C, P)

1: input: Feature matrix X, weight vector w, score matrix C, and preference set P
2: initialization: r = 0 and g = 0_m
3: Compute f = Xw
4: for all (i, j) ∈ P do
5:   if f_j − f_i < 1 then
6:     r ← r + C(i, j)(1 + f_i − f_j)
7:     g_i ← g_i + C(i, j) and g_j ← g_j − C(i, j)
8:   end if
9: end for
10: g ← g^T X
11: return Risk r and subgradient g
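A direct NumPy transcription of Algorithm 3.4 might look as follows (my own sketch; P is given as a list of index pairs, C as a dense matrix, and the 1/|P| normalization of (3.22) is applied at the end).

import numpy as np

def preference_loss(X, w, C, P):
    """Risk (3.22) and subgradient (3.23) over an enumerable preference set P."""
    f = X @ w
    r, g = 0.0, np.zeros(len(f))      # g lives on observations until the final g^T X
    for i, j in P:
        if f[j] - f[i] < 1:           # margin violated: pair (i, j) is active
            r += C[i, j] * (1 + f[i] - f[j])
            g[i] += C[i, j]
            g[j] -= C[i, j]
    return r / len(P), (g @ X) / len(P)

# Toy usage: three preferences among five items with unit costs.
rng = np.random.default_rng(3)
X, w = rng.standard_normal((5, 2)), rng.standard_normal(2)
C = np.ones((5, 5))
P = [(0, 1), (2, 3), (1, 4)]
print(preference_loss(X, w, C, P))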

A3.1.3.4 Ranking

In webpage and document ranking we are often in a situation similar to that described in Section A3.1.3.2, with the difference that we do not only care about objects x_i being ranked according to scores y_i, but moreover that different degrees of importance are placed on different documents.

The information retrieval literature is full of a large number of different scoring functions. Examples are criteria such as Normalized Discounted Cumulative Gain (NDCG), Mean Reciprocal Rank (MRR), Precision@n, or Expected Rank Utility (ERU). They are used to address the issue of evaluating rankers, search engines or recommender systems [Voo01, JK02, BHK98, BH04]. For instance, in webpage ranking only the first k retrieved documents matter, since users are unlikely to look beyond the first k, say 10, retrieved webpages in an internet search. [LS07] show that these scores can be optimized directly by minimizing the following loss:

l(X, y, w) = max_π Σ_i c_i ⟨w, x_{π(i)} − x_i⟩ + ⟨a − a(π), b(y)⟩.   (3.24)

Here c_i is a monotonically decreasing sequence, the documents are assumed to be arranged in order of decreasing relevance, π is a permutation, the vectors a and b(y) depend on the choice of a particular ranking measure, and a(π) denotes the permutation of a according to π. Pre-computing f = Xw we may rewrite (3.24) as

l(f, y) = max_π [c^T f(π) − a(π)^T b(y)] − c^T f + a^T b(y),   (3.25)

Algorithm 3.5 Ranking(X, y, w)

1: input: Feature matrix X, relevances y, and weight vector w
2: Compute vectors a and b(y) according to some ranking measure
3: Compute f = Xw
4: Compute the elements of the matrix C_ij = c_i f_j − b_i a_j
5: π = LinearAssignment(C)
6: r = c^T (f(π) − f) + (a − a(π))^T b
7: g = c(π^{−1}) − c and g ← g^T X
8: return Risk r and subgradient g

and consequently the derivative of l(X, y, w) with respect to w is given by

∂_w l(X, y, w) = (c(π̄^{−1}) − c)^T X where π̄ = argmax_π c^T f(π) − a(π)^T b(y).   (3.26)

Here π^{−1} denotes the inverse permutation, such that π ∘ π^{−1} = 1. Finding the permutation maximizing c^T f(π) − a(π)^T b(y) is a linear assignment problem which can be solved by the Hungarian Marriage algorithm, that is, the Kuhn-Munkres algorithm.

The original papers by [Kuh55] and [Mun57] implied an algorithm with O(m³) cost in the number of terms. Later, [Kar80] suggested an algorithm with expected quadratic time in the size of the assignment problem (ignoring log-factors). Finally, [OL93] propose a linear time algorithm for large problems. Since in our case the number of pages is fairly small (on the order of 50 to 200 per query), the scaling behavior per query is not too important. We used an existing implementation due to [JV87].

Note also that training sets consist of a collection of ranking problems, that is, we have several ranking problems of size 50 to 200. By means of parallelization we are able to distribute the work onto a cluster of workstations, which allows us to overcome the rather costly computation per collection of queries. Algorithm 3.5 spells out the steps in detail.
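The inner maximization in (3.25) is a linear assignment problem; in Python it can be solved with scipy.optimize.linear_sum_assignment, as in the sketch below (my own example; the decreasing weights c and the vectors a and b are arbitrary stand-ins for a concrete ranking measure).

import numpy as np
from scipy.optimize import linear_sum_assignment

def ranking_loss(f, c, a, b):
    """Loss (3.25) and the maximizing permutation via linear assignment."""
    # C[i, j] = gain of placing document j at rank i: c_i f_j - b_i a_j
    C = np.outer(c, f) - np.outer(b, a)
    rows, cols = linear_sum_assignment(C, maximize=True)   # argmax over permutations
    pi = cols                                              # pi[i] = document at rank i
    r = c @ f[pi] - a[pi] @ b - c @ f + a @ b
    return r, pi

# Toy usage with made-up measure vectors.
rng = np.random.default_rng(4)
m = 6
f = rng.standard_normal(m)
y = np.arange(m, 0, -1)                 # decreasing relevances
c = 1.0 / np.log2(np.arange(m) + 2)     # NDCG-style decreasing weights
a, b = c.copy(), y.astype(float)        # stand-ins for the measure-dependent vectors
print(ranking_loss(f, c, a, b))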

A3.1.3.5 Contingency Table Scores

[Joa05] observed that F_β scores and related quantities dependent on a contingency table can also be computed efficiently by means of structured estimation. Such scores depend in general on the number of true and false positives and negatives alike. Algorithm 3.6 shows how a corresponding empirical risk and subgradient can be computed efficiently. As with the previous losses, here again we use convex majorization to obtain a tractable optimization problem.
Given a set of labels y and an estimate y′, the numbers of true positives (T+), true negatives (T−), false positives (F+), and false negatives (F−) are determined according to a contingency table as follows:

           y > 0    y < 0
y′ > 0     T+       F+
y′ < 0     F−       T−

In the sequel, we denote by m+ = T+ + F− and m− = T− + F+ the numbers of positive and negative labels in y, respectively. We note that the F_β score can be computed based on the contingency table [Joa05] as

F_β(T+, T−) = (1 + β²)T+ / (T+ + m− − T− + β²m+).   (3.27)

If we want to use ⟨w, x_i⟩ to estimate the label of observation x_i, we may use the following structured loss to "directly" optimize with respect to the F_β score [Joa05]:

l(X, y, w) = max_{y′} [(y′ − y)^T f + ∆(T+, T−)],   (3.28)

where f = Xw, ∆(T+, T−) := 1 − F_β(T+, T−), and (T+, T−) is determined by using y and y′. Since ∆ does not depend on the specific choice of (y, y′) but rather just on the sets on which they disagree, l can be maximized as follows: enumerate all possible m+ · m− contingency tables in such a way that, given a configuration (T+, T−), the T+ positive observations x_i with largest value of ⟨w, x_i⟩ are labeled as positive and the T− negative observations with lowest value of ⟨w, x_i⟩ are labeled as negative. This is effectively implemented as a nested loop and hence runs in O(m²) time. Algorithm 3.6 describes the procedure in detail.

A3.1.4 Vector Loss Functions

Next we discuss "vector" loss functions, i.e., functions where w is best described as a matrix (denoted by W) and the loss depends on Wx. Here, we have feature vector x ∈ R^d, label y ∈ R^k, and weight matrix W ∈ R^{d×k}. We also denote the feature matrix X ∈ R^{m×d} as a matrix of m feature vectors x_i, and stack up the columns W_i of W as a vector w.

Some of the most relevant cases are multiclass classification using both the exponential families model and structured estimation, hierarchical models, i.e., ontologies, and multivariate regression. Many of those cases are summarized in Table A3.2.

Algorithm 3.6 Fβ(X, y, w)

1: input: Feature matrix X, labels y, and weight vector w
2: Compute f = Xw
3: π+ ← {i : y_i = 1} sorted in descending order of f
4: π− ← {i : y_i = −1} sorted in ascending order of f
5: Let p_0 = 0 and p_i = 2 Σ_{k=i}^{m+} f_{π+_k}, i = 1, ..., m+
6: Let n_0 = 0 and n_i = 2 Σ_{k=i}^{m−} f_{π−_k}, i = 1, ..., m−
7: y′ ← −y and r ← −∞
8: for i = 0 to m+ do
9:   for j = 0 to m− do
10:    r_tmp = ∆(i, j) − p_i + n_j
11:    if r_tmp > r then
12:      r ← r_tmp
13:      T+ ← i and T− ← j
14:    end if
15:  end for
16: end for
17: y′_{π+_i} ← 1 for i = 1, ..., T+
18: y′_{π−_i} ← −1 for i = 1, ..., T−
19: g ← (y′ − y)^T X
20: return Risk r and subgradient g
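The enumeration underlying Algorithm 3.6 can also be written directly from (3.28). The sketch below is my own rendering; it favors clarity over the constant-factor savings of the cumulative sums p and n in Algorithm 3.6, but implements the same nested-loop search over contingency tables.

import numpy as np

def fbeta_loss(X, y, w, beta=1.0):
    """Structured F_beta loss (3.28) by enumerating all contingency tables."""
    f = X @ w
    pos = np.where(y == 1)[0]
    pos = pos[np.argsort(-f[pos])]        # positives sorted in descending order of f
    neg = np.where(y == -1)[0]
    neg = neg[np.argsort(f[neg])]         # negatives sorted in ascending order of f
    m_pos, m_neg = len(pos), len(neg)

    def delta(tp, tn):                    # Delta = 1 - F_beta, cf. (3.27)
        denom = tp + m_neg - tn + beta**2 * m_pos
        return 1.0 - (1 + beta**2) * tp / denom

    best, best_yp = -np.inf, None
    for tp in range(m_pos + 1):
        for tn in range(m_neg + 1):
            yp = -y.astype(float)         # start with every label flipped
            yp[pos[:tp]] = 1              # tp highest-scoring positives kept positive
            yp[neg[:tn]] = -1             # tn lowest-scoring negatives kept negative
            val = (yp - y) @ f + delta(tp, tn)
            if val > best:
                best, best_yp = val, yp
    return best, (best_yp - y) @ X        # risk and subgradient g = (y' - y)^T X

# Toy usage.
rng = np.random.default_rng(5)
X, w = rng.standard_normal((30, 3)), rng.standard_normal(3)
y = rng.choice([-1, 1], size=30)
print(fbeta_loss(X, y, w))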

A3.1.4.1 Unstructured Setting

The simplest loss is multivariate regression, where l(x, y, W) = (1/2)(y − x^T W)^T M(y − x^T W). In this case it is clear that by pre-computing XW subsequent calculations of the loss and its gradient are significantly accelerated.

A second class of important losses is given by plain multiclass classification problems, e.g., recognizing digits of a postal code or categorizing high-level document categories. In this case, φ(x, y) is best represented by e_y ⊗ x (using a linear model). Clearly we may view ⟨w, φ(x, y)⟩ as an operation which chooses the column indexed by y from xW, since all labels y correspond to a different weight vector W_y. Formally we set ⟨w, φ(x, y)⟩ = [xW]_y. In this case, structured estimation losses can be rewritten as

l(x, y, W) = max_{y′} Γ(y, y′)⟨W_{y′} − W_y, x⟩ + ∆(y, y′)   (3.29)

and

∂_W l(x, y, W) = Γ(y, y*)(e_{y*} − e_y) ⊗ x.   (3.30)

Here Γ and ∆ are defined as in Section A3.1.2, and y* denotes the value of y′

for which the RHS of (3.29) is maximized. This means that for unstructured multiclass settings we may simply compute xW. Since this needs to be performed for all observations x_i we may take advantage of fast linear algebra routines and compute f = XW for efficiency. Likewise note that computing the gradient over m observations is now a matrix-matrix multiplication, too: denote by G the matrix whose rows are the gradients Γ(y_i, y_i*)(e_{y_i*} − e_{y_i}). Then ∂_W R_emp(X, y, W) = G^T X. Note that G is very sparse with at most two nonzero entries per row, which makes the computation of G^T X essentially as expensive as two matrix-vector multiplications. Whenever we have many classes, this may yield significant computational gains.
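In NumPy this vectorization might look as follows (my own sketch; it fixes Γ ≡ 1 and the 0/1 cost ∆, so each row of G reduces to e_{y*} − e_y).

import numpy as np

def multiclass_softmargin(W, X, y, n_classes):
    """Multiclass soft-margin risk (3.29) with Gamma = 1, Delta = 0/1, and gradient G^T X."""
    m = X.shape[0]
    f = X @ W                                   # f = XW, one matrix-matrix product
    margin = f - f[np.arange(m), y][:, None] + 1.0
    margin[np.arange(m), y] -= 1.0              # Delta(y, y) = 0 for the true class
    ystar = np.argmax(margin, axis=1)           # per-row maximizer of (3.29)
    r = margin[np.arange(m), ystar].sum()
    G = np.zeros((m, n_classes))                # at most two nonzeros per row
    G[np.arange(m), ystar] += 1.0
    G[np.arange(m), y] -= 1.0
    return r, X.T @ G                           # gradient with the shape of W
                                                # (the transpose of the text's G^T X)

# Toy usage.
rng = np.random.default_rng(6)
d, k, m = 5, 4, 100
W = rng.standard_normal((d, k))
X = rng.standard_normal((m, d))
y = rng.integers(0, k, size=m)
print(multiclass_softmargin(W, X, y, k)[0])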
Log-likelihood scores of exponential families share similar expansions. We have

l(x, y, W) = log Σ_{y′} exp⟨w, φ(x, y′)⟩ − ⟨w, φ(x, y)⟩ = log Σ_{y′} exp⟨W_{y′}, x⟩ − ⟨W_y, x⟩   (3.31)

and

∂_W l(x, y, W) = [Σ_{y′} (e_{y′} ⊗ x) exp⟨W_{y′}, x⟩]/[Σ_{y′} exp⟨W_{y′}, x⟩] − e_y ⊗ x.   (3.32)

The main difference to the soft-margin setting is that the gradients are not sparse in the number of classes. This means that the computation of gradients is slightly more costly.
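The corresponding computation for the exponential families loss (3.31)–(3.32) replaces the argmax by a softmax; a sketch under the same conventions as above (my own rendering, with a standard log-sum-exp stabilization added):

import numpy as np

def multiclass_softmax(W, X, y):
    """Negative log-likelihood (3.31) and its dense gradient (3.32), summed over rows."""
    m = X.shape[0]
    f = X @ W
    f -= f.max(axis=1, keepdims=True)           # stabilize the log-sum-exp
    logZ = np.log(np.exp(f).sum(axis=1))
    r = (logZ - f[np.arange(m), y]).sum()
    P = np.exp(f - logZ[:, None])               # P[i, y'] = p(y'|x_i): a dense G
    P[np.arange(m), y] -= 1.0
    return r, X.T @ P                           # gradient with the shape of W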

A3.1.4.2 Ontologies

Fig. A3.1. Two ontologies. Left: a binary hierarchy with internal nodes {1, ..., 7} and labels {8, ..., 15}. Right: a generic directed acyclic graph with internal nodes {1, ..., 6, 12} and labels {7, ..., 11, 13, ..., 15}. Note that node 5 has two parents, namely nodes 2 and 3. Moreover, the labels need not be found at the same level of the tree: nodes 14 and 15 are one level lower than the rest of the nodes.

Assume that the labels we want to estimate belong to a directed acyclic graph. For instance, this may be a gene-ontology graph [ABB+00], a patent hierarchy [CH04], or a genealogy. In these cases we have a hierarchy of categories to which an element x may belong. Figure A3.1 gives two examples of such directed acyclic graphs (DAGs). The first example is a binary tree, while the second contains nodes with different numbers of children (e.g., nodes 4 and 12), nodes at different levels having children (e.g., nodes 5 and 12), and nodes which have more than one parent (e.g., node 5). It is a well known fundamental property of trees that they have at most as many internal nodes as they have leaf nodes.
It is now our goal to build a classifier which is able to categorize observations according to which leaf node they belong to (each leaf node is assigned a label y). Denote by k + 1 the number of nodes in the DAG including the root node. In this case we may design a feature map φ(y) ∈ R^k [CH04] by associating with every label y the vector describing the path from the root node to y, ignoring the root node itself. For instance, for the first DAG in Figure A3.1 we have

φ(8) = (1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0) and φ(13) = (0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0).

Whenever several paths are admissible, as in the right DAG of Figure A3.1, we average over all possible paths. For example, we have

φ(10) = (0.5, 0.5, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0) and φ(15) = (0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1).

Also note that the lengths of the paths need not be the same (e.g., to reach 15 it takes a longer path than to reach 13). Likewise, it is natural to assume that ∆(y, y′), i.e., the cost for mislabeling y as y′, will depend on the similarity of the paths. In other words, it is likely that the cost for placing x into the wrong sub-sub-category is less than getting the main category of the object wrong.
To complete the setting, note that for φ(x, y) = φ(y) ⊗ x the cost of computing all labels is k inner products, since the value of ⟨w, φ(x, y)⟩ for a particular y can be obtained by summing the contributions for the segments of the path. This means that the values for all terms can be computed by a simple breadth first traversal through the graph. As before, we may make use of vectorization in our approach, since we may compute xW ∈ R^k to obtain the contributions on all segments of the DAG before performing the graph traversal. Since we have m patterns x_i we may vectorize matters by pre-computing XW.

Also note that φ(y) − φ(y′) is nonzero only for those edges where the paths for y and y′ differ. Hence we only change weights on those parts of the graph where the categorization differs. Algorithm 3.7 describes the subgradient and loss computation for the soft-margin type of loss function.

Algorithm 3.7 Ontology(X, y, W)

1: input: Feature matrix X ∈ R^{m×d}, labels y, and weight matrix W ∈ R^{d×k}
2: initialization: G = 0 ∈ R^{m×k} and r = 0
3: Compute f = XW and let f_i = x_i W
4: for i = 1 to m do
5:   Let D_i be the DAG with edges annotated with the values of f_i
6:   Traverse D_i to find the node y* maximizing z_{y′}, the sum of the values of f_i along the path to y′ plus ∆(y_i, y′)
7:   G_i = φ(y*) − φ(y_i)
8:   r ← r + z_{y*} − z_{y_i}
9: end for
10: g = G^T X
11: return Risk r and subgradient g
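A small sketch of the path feature map is given below (my own example, for the single-path tree case; multi-parent nodes would average over admissible paths as described above).

import numpy as np

def path_feature(parent, y, k):
    """phi(y) in R^k: indicator of the nodes on the path root -> y, root excluded.

    parent[v] gives the (single) parent of node v; nodes are numbered 1, ..., k + 1
    with root 1, so node v occupies component v - 1, i.e. index v - 2.
    """
    phi = np.zeros(k)
    v = y
    while parent[v] != 0:                # walk up until the root (whose parent is 0)
        phi[v - 2] = 1.0
        v = parent[v]
    return phi

# Left DAG of Figure A3.1: a complete binary tree with nodes 1..15 and root 1.
parent = {1: 0}
parent.update({v: v // 2 for v in range(2, 16)})
print(path_feature(parent, 8, 14))       # (1, 0, 1, 0, 0, 0, 1, ...) as in the text
print(path_feature(parent, 13, 14))      # (0, 1, 0, 0, 1, 0, ..., 1, 0, 0)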

The same reasoning applies to estimation when using an exponential families model. The only difference is that we need to compute a soft-max over paths rather than exclusively choosing the best path over the ontology. Again, a breadth-first recursion suffices: each of the leaves y of the DAG is associated with a probability p(y|x). To obtain E_{y∼p(y|x)}[φ(y)] all we need to do is perform a bottom-up traversal of the DAG, summing over all probability weights on the path. Wherever a node has more than one parent, we distribute the probability weight equally over its parents.
