cs188 sp23 Note25
Non-linear Separators
We know how to construct a model that learns a linear boundary for binary classification tasks. This is a
powerful technique, and one that works well when the underlying optimal decision boundary is itself linear.
However, many practical problems involve the need for decision boundaries that are nonlinear in nature, and
our linear perceptron model isn’t expressive enough to capture this relationship.
Consider the following set of data:
We would like to separate the two colors, and clearly there is no way this can be done in a single dimension
(a single dimensional decision boundary would be a point, separating the axis into two regions).
To fix this problem, we can add additional (potentially nonlinear) features from which to construct a decision boundary. Consider the same dataset with the addition of x² as a feature:
With this additional piece of information, we are now able to construct a linear separator in the two-dimensional space containing the points. In this case, we were able to fix the problem by mapping our data to a higher-dimensional space by manually adding useful features to data points. However, in many high-dimensional problems, such as image classification, manually selecting useful features is tedious. It requires domain-specific effort and expertise, and works against the goal of generalization
across tasks. A natural desire is to learn these featurization or transformation functions as well, perhaps
using a nonlinear function class that is capable of representing a wider variety of functions.
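As a small sketch of the feature-lifting idea above, the toy dataset below (values chosen for illustration, not taken from the note) is not separable by any single threshold on x, but becomes linearly separable once x² is added as a second feature:

```python
import numpy as np

# Hypothetical 1D dataset: class 0 sits between the two class-1 clusters,
# so no single threshold on x separates the classes.
x = np.array([-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0])
y = np.array([1, 1, 0, 0, 0, 1, 1])

# Lift each point into 2D by adding the nonlinear feature x^2.
features = np.column_stack([x, x ** 2])

# In the lifted space, the horizontal line x2 = 1 separates the classes:
# class 1 points all have x^2 > 1, class 0 points all have x^2 < 1.
predictions = (features[:, 1] > 1.0).astype(int)
print(predictions)  # matches y exactly
```

Here the useful feature x² was chosen by hand; the point of the discussion that follows is to learn such transformations instead.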
With this additional structure and weights, we can express a much wider set of functions.
By increasing the complexity of our model, we in turn greatly increase its expressive power. Multi-layer perceptrons give us a generic way to represent a much wider set of functions. In fact, a multi-layer perceptron is a universal function approximator and can approximate any continuous function, leaving us only with the problem of selecting the best set of weights to parameterize our network. This is formally stated below:
Theorem. (Universal Function Approximators) A two-layer neural network with a sufficient number
of neurons can approximate any continuous function to any desired accuracy.
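The theorem can be illustrated numerically. The sketch below (a toy setup, not from the note) builds a two-layer network with 100 random tanh hidden units and fits only the output weights by least squares; even this crude procedure approximates a smooth target like sin(x) closely:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-3, 3, 200).reshape(-1, 1)
target = np.sin(x).ravel()

# First layer: 100 tanh neurons with fixed random weights and biases
# (the scale 2.0 is an arbitrary illustrative choice).
W1 = rng.normal(scale=2.0, size=(1, 100))
b1 = rng.normal(scale=2.0, size=100)
hidden = np.tanh(x @ W1 + b1)

# Fit only the output layer by least squares -- enough for this demo,
# though a real network would train all layers by gradient methods.
w2, *_ = np.linalg.lstsq(hidden, target, rcond=None)

# Worst-case approximation error over the grid.
error = np.max(np.abs(hidden @ w2 - target))
```

With more neurons, the achievable error can be driven arbitrarily small, which is exactly what the theorem promises.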
This expression denotes the likelihood of a particular set of weights explaining the observed labels and data
points. We would like to find the set of weights that maximizes this quantity. This is identical to finding the
maximum of the log-likelihood expression (since log is a monotone function, the maximizer of one will be
the maximizer of the other):
log ℓ(w) = log ∏_{i=1}^{n} P(y_i | x_i; w) = ∑_{i=1}^{n} log P(y_i | f(x_i); w).
Depending on the application, the formulation as a sum of log probabilities may be more useful. In the
case where the log-likelihood is differentiable with respect to the weights, we can optimize it using gradient
ascent.
[Graph: a step function over x ∈ [−2, 2].]
This is difficult to optimize for a number of reasons. Firstly, it is not continuous, and secondly, it has a derivative of zero at every point where it is differentiable. Intuitively, this means that we cannot know in which direction to look for a local minimum of the function, which makes it difficult to minimize the loss in a smooth way.
Instead of using a step function like above, a better solution is to select a continuous function. We have
many options for such a function, including the sigmoid function (named for the Greek σ or ’s’ as it looks
like an ’s’) as well as the rectified linear unit (ReLU). Let’s look at their definitions and graphs below:
Sigmoid: σ(x) = 1 / (1 + e^(−x))

[Graph: the sigmoid curve over x ∈ [−10, 10], rising smoothly from 0 to 1.]
ReLU: f(x) = 0 if x < 0, and f(x) = x if x ≥ 0 (equivalently, f(x) = max(0, x))

[Graph: the ReLU function over x ∈ [−2, 2].]
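The two activation functions defined above can be written directly as numpy helpers (a minimal sketch):

```python
import numpy as np

def sigmoid(x):
    # Sigmoid: 1 / (1 + e^(-x)); squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # ReLU: max(0, x); zero for negative inputs, identity otherwise.
    return np.maximum(0.0, x)

print(sigmoid(0.0))            # 0.5
print(relu(-2.0), relu(3.0))   # 0.0 3.0
```

Unlike the step function, both are continuous, and the sigmoid is differentiable everywhere, so gradient methods can follow their slopes.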
Calculating the output of a multi-layer perceptron is done as before, with the difference that at the output
of each layer we now apply one of our new non-linearities (chosen as part of the architecture for the neural
network) instead of the initial indicator function. In practice, the choice of nonlinearity is a design choice
that typically requires some experimentation to select a good one for each individual use case.
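The forward computation just described can be sketched as follows, here with a sigmoid nonlinearity at each layer (the shapes and random weights are illustrative assumptions, not from the note):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Apply the chosen nonlinearity at the output of each layer,
    # in place of the original indicator (step) function.
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                 # 4 data points, 3 features each
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # hidden layer of 5 units
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # single output unit
probs = forward(x, W1, b1, W2, b2)          # one value in (0, 1) per point
```

Swapping `sigmoid` for `relu` in the hidden layer is exactly the kind of architectural design choice mentioned above.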
Now we can find the optimal values of the parameters using the gradient ascent method described earlier.
Given that the datasets are usually large, mini-batch gradient ascent is the most popular variant of gradient ascent in neural network optimization.
[Figure: a computation graph with inputs x = 2, y = 3, z = 4, an addition node g, and a multiplication node f.]
∂f/∂t_i = (∂f/∂x_1) · (∂x_1/∂t_i) + (∂f/∂x_2) · (∂x_2/∂t_i) + ... + (∂f/∂x_n) · (∂x_n/∂t_i).

In the context of computation graphs, this means that to compute the gradient of the output f with respect to a given node t_i, we take a sum with one term for each child of t_i.
[Figure 1: annotated computation graph for f = (x + y) ∗ z with x = 2, y = 3, z = 4. Forward values (marked →): g = x + y = 5, f = g ∗ z = 20. Backward gradients (marked ←): ∂f/∂f = 1, ∂f/∂g = 4, ∂f/∂z = 5, ∂f/∂x = 4, ∂f/∂y = 4.]
Figure 1 shows an example computation graph for computing (x + y) ∗ z with the values x = 2, y = 3, z = 4.
1. Since f is our final node, it has gradient ∂f/∂f = 1. Then we compute the gradients for its children, g and z. We have ∂f/∂g = ∂/∂g (g · z) = z = 4, and ∂f/∂z = ∂/∂z (g · z) = g = 5.
2. Now we can move upstream to compute the gradients of x and y. For these, we'll use the chain rule and reuse the gradient we just computed for g, ∂f/∂g.
3. For x, we have ∂f/∂x = (∂f/∂g)(∂g/∂x) by the chain rule – the product of the gradient coming from g with the partial derivative for x at this node. We have ∂g/∂x = ∂/∂x (x + y) = ∂x/∂x + ∂y/∂x = 1 + 0, so ∂f/∂x = 4 · 1 = 4. Intuitively, the amount that a change in x contributes to a change in f is the product of the amount that a change in g contributes to a change in f with the amount that a change in x contributes to one in g.
4. The process for computing the gradient of the output with respect to y is almost identical. For y we have ∂f/∂y = (∂f/∂g)(∂g/∂y) by the chain rule – the product of the gradient coming from g with the partial derivative for y at this node. We have ∂g/∂y = ∂/∂y (x + y) = ∂x/∂y + ∂y/∂y = 0 + 1, so ∂f/∂y = 4 · 1 = 4.
Since the backward pass step for a node in general depends on the node’s inputs (which are computed in the
forward pass), and gradients computed “downstream” of the current node by the node’s children (computed
earlier in the backward pass), we cache all of these values in the graph for efficiency. Taken together, the
forward and backward pass over the graph make up the backpropagation algorithm.
For an example of applying the chain rule for a node with multiple children, consider the graph in Figure 2,
representing ((x + y) + (x · y)) · z, with x = 2, y = 3, z = 4. x and y are each used in 2 operations, and so each
has two children. By the chain rule, their gradient values are the sum of the gradients computed for them
by their children (i.e. gradient values add at path junctions). For example, to compute the gradient for x, we
have
∂f/∂x = (∂f/∂h)(∂h/∂x) + (∂f/∂i)(∂i/∂x) = 4 · 1 + 4 · 3 = 4 + 12 = 16.
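Writing out the whole Figure 2 computation the same way (naming the intermediate nodes h = x + y, i = x · y, and g = h + i as in the graph) shows the gradients from both children being summed:

```python
# Gradients for f = ((x + y) + (x * y)) * z, where x and y each feed
# two operations, so their gradients are sums over both paths.
x, y, z = 2.0, 3.0, 4.0

# Forward pass.
h = x + y      # 5
i = x * y      # 6
g = h + i      # 11
f = g * z      # 44

# Backward pass.
df_dg = z                        # 4
df_dh = df_dg * 1.0              # 4
df_di = df_dg * 1.0              # 4
df_dx = df_dh * 1.0 + df_di * y  # 4 + 12 = 16
df_dy = df_dh * 1.0 + df_di * x  # 4 + 8  = 12
df_dz = g                        # 11
```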
Now that we have a method for computing gradients for all parameters of the network, we can use gradient
descent methods to optimize the parameters to get high accuracy on our training data. For example, suppose
[Figure 2: computation graph for f = ((x + y) + (x · y)) ∗ z with x = 2, y = 3, z = 4. Forward values (→): h = x + y = 5, i = x · y = 6, g = h + i = 11, f = g ∗ z = 44. Backward gradients (←): ∂f/∂g = 4, ∂f/∂h = 4, ∂f/∂i = 4, ∂f/∂z = 11, ∂f/∂x = 16, ∂f/∂y = 12.]
we have designed some classification network to output probabilities of classes y for data points x, and have
m different training data points (see the Measuring Accuracy section for more on this). Let w be all the
parameters of our network. We want to find values for the parameters w that maximize the likelihood of the
true class probabilities for our data, so we have the following function to run gradient ascent on:
ℓℓ(w) = log ∏_{i=1}^{m} P(y^(i) | x^(i); w) = ∑_{i=1}^{m} log P(y^(i) | x^(i); w)
where x(1) , . . . , x(m) are the m data points in our training set.
To minimize the negative log-likelihood, at each iteration of gradient descent we use the data points x^(1), ..., x^(m) to compute gradients for the parameters w, update the parameters, and repeat until the parameters converge (at which point we've reached a local minimum of the function).
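That training loop can be sketched for the simplest possible "network", a single logistic unit, trained by gradient descent on the mean negative log-likelihood (the dataset, learning rate, and iteration count below are illustrative assumptions):

```python
import numpy as np

# Illustrative linearly separable dataset: labels come from a hidden
# weight vector true_w, which the training loop must recover.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = (X @ true_w > 0).astype(float)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)                 # predicted probabilities P(y=1 | x; w)
    grad = X.T @ (p - y) / len(y)      # gradient of mean negative log-likelihood
    w -= 0.5 * grad                    # gradient descent step (learning rate 0.5)

accuracy = np.mean((sigmoid(X @ w) > 0.5) == (y == 1.0))
```

For a full multi-layer network, the only change is that `grad` is produced by backpropagation over the computation graph rather than by this closed-form expression.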
Neural networks are powerful (and universal!) function approximators, but can be difficult to design and
train. There’s a lot of ongoing research in deep learning focusing on various aspects of neural network
design such as:
1. Network Architectures - designing a network (choosing activation functions, number of layers, etc.)
that’s a good fit for a particular problem
2. Learning Algorithms - how to find parameters that achieve a low value of the loss function, a difficult
problem since gradient descent is a greedy algorithm and neural nets can have many local optima
3. Generalization and Transfer Learning - since neural nets have many parameters, it’s often easy to
overfit training data - how do you guarantee that they also have low loss on testing data you haven’t
seen before?
Summary
In this note, we introduced logistic regression and its close variant, multiclass logistic regression.
We also examined how to solve optimization problems using gradient methods.
Combining these concepts, we come to the idea of neural networks, which are essentially multi-layer perceptrons with nonlinear activations between layers. These networks are extremely powerful for approximating functions, and are trained using gradient descent.