
CS 182/282A: Designing/Visualizing and Understanding Deep Neural Networks

Lecture 2: Machine Learning Review


23 January 2023
Lecturer: Professor Anant Sahai Scribe: Arm Wonghirundacha

We will cover the standard optimization-based paradigm for supervised learning.

1 The ingredients
The primary ingredients in a standard optimization-based supervised learning problem are:

• Data: (xi , yi ) pairs, where xi is the input/covariates, yi is the label/output, and the
index i = 1, . . . , n runs over the n training examples.

• Model: fθ (·) for parameters θ

• Loss Function: ℓ(yi , fθ (xi )) which returns a real number

• Optimization Algorithm

We will now expand on these ingredients and highlight complications which arise, along with
the standard solutions to address them.

2 The model
Training the model means choosing the parameters θ. We do so by using empirical risk
minimization. One example of this is to choose θ which minimizes average loss:
θ̂ = argmin_θ (1/n) Σ_{i=1}^{n} ℓtrain (yi , fθ (xi ))        (1)
For our setup, model performance is based on the average loss evaluated on our data. However,
this is not reflective of our true goal, which is to ensure the model performs well when
deployed in the real world, since real-world data could look different from our training data.
Because of this, we need proxies that give us a better understanding of the true
performance of our model.

A mathematical proxy is given when we make an assumption about the probability distribution
P (X, Y ) of the underlying data, that is, we assume what the data in the real
world would look like. With this distribution assumption, we can evaluate the expected loss
EX,Y [ℓ(Y, fθ (X))], which we want to be as low as possible.
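To make this concrete, here is a minimal sketch (not from the lecture) of estimating the expected loss by Monte Carlo under an assumed P (X, Y ); the distribution, linear model, and squared-error loss are all made-up assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up assumption about P(X, Y): X ~ N(0, I) in 3 dimensions and
# Y = w·X + small Gaussian noise. Nothing here comes from the lecture.
d = 3
w_true = np.array([1.0, -2.0, 0.5])

def sample_P(m):
    X = rng.normal(size=(m, d))
    Y = X @ w_true + 0.1 * rng.normal(size=m)
    return X, Y

# Some fixed parameters θ for a linear model f_θ(x) = θ·x (also made up).
theta = np.array([0.9, -1.8, 0.4])

# Under the assumed P(X, Y), the expected loss E_{X,Y}[ℓ(Y, f_θ(X))] with
# squared-error loss can be approximated by averaging over many fresh samples.
X, Y = sample_P(100_000)
expected_loss_estimate = np.mean((Y - X @ theta) ** 2)
print(expected_loss_estimate)
```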

Making such an assumption about the underlying distribution causes a few complications.

Complication 1: We have no access to P (X, Y ); any probability distribution we would use
comes with many strong assumptions about the real world.
Solution: A standard way to tackle such a complication is to partition the initial data
we collected into a train set and a test set, where the test set consists of the pairs
(xtest,i , ytest,i ) for i = 1, . . . , ntest . We withhold the test data during model training
and then use it to observe the test error, defined as:

(1/ntest ) Σ_{i=1}^{ntest} ℓtrue (ytest,i , fθ̂ (xtest,i ))        (2)

Test error should be a faithful representation of the real world, so we should refrain from
using the test data until our model training is complete; otherwise we would get caught in a
feedback loop and have to deal with the additional complication of model overfitting.
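A minimal sketch of this train/test workflow, assuming a toy linear model fit by least squares with a squared-error loss (all made up for illustration; nothing here is prescribed by the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset of n labelled examples with d covariates (purely illustrative).
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Withhold a test set before any training happens.
n_test = 50
X_train, y_train = X[:-n_test], y[:-n_test]
X_test, y_test = X[-n_test:], y[-n_test:]

def squared_loss(y_true, y_pred):
    return (y_true - y_pred) ** 2

# "Training" here is ordinary least squares, standing in for the argmin in eq. (1).
theta_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_error = squared_loss(y_train, X_train @ theta_hat).mean()
test_error = squared_loss(y_test, X_test @ theta_hat).mean()   # the average in eq. (2)
print(train_error, test_error)
```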

3 Loss Functions
A loss function maps a set of values to a real number which in theory should reflect some sort
of cost associated with the event we are trying to model. There are many different loss functions
we may have seen in previous coursework, including hinge loss or logistic loss for binary
classifiers and cross-entropy loss for multi-class classifiers. With many different loss
functions in mind, we must decide which loss function is best for our use case. This
decision comes with some complications.

Complication 2: The loss function ℓtrue (·, ·) that we actually care about is incompatible
with our optimizer, e.g. our loss function is non-differentiable but the optimizer requires
its derivatives.
Solution: Use a surrogate loss, ℓtrain (·, ·), that satisfies the conditions of the optimizer
to train our model, but still evaluate the performance of our model, that is, calculate test
error, with ℓtrue (·, ·), since at evaluation time we no longer have to deal with the constraints
of the optimizer. For example, y ∈ {cat, dog} with ℓtrue the Hamming loss: we can’t take
derivatives of ‘cat’ and ‘dog’, so we map the labels to real numbers (cat → −1, dog → +1),
which is now differentiable when we choose ℓtrain to be the squared-error loss.
A side note: (1/n) Σ_{i=1}^{n} ℓtrain (yi , fθ̂ (xi )) and (1/n) Σ_{i=1}^{n} ℓtrue (yi , fθ̂ (xi ))
are two different quantities, since we are finding the error using different loss functions.
You might ask: why do we need to find the training error using ℓtrue when we can just evaluate
the model using test data instead? We use this as a debugging method, since evaluating the
training error with the true loss function helps us understand whether the surrogate loss is
doing an adequate job of training our model.
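A minimal sketch of the surrogate-loss idea under the same kind of toy assumptions (a linear classifier trained by least squares on ±1-encoded labels, then evaluated with the 0-1/Hamming loss; the data are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary data: the labels {cat, dog} are encoded as {-1, +1} so that a
# differentiable surrogate loss can be used (everything here is made up).
n, d = 300, 2
X = rng.normal(size=(n, d))
y_pm = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

# Surrogate training: least squares on the +/-1 encoding (squared-error loss).
theta_hat, *_ = np.linalg.lstsq(X, y_pm, rcond=None)

scores = X @ theta_hat
preds_pm = np.where(scores >= 0, 1.0, -1.0)

surrogate_train_error = np.mean((y_pm - scores) ** 2)   # ℓ_train: squared error
true_train_error = np.mean(preds_pm != y_pm)            # ℓ_true: Hamming / 0-1 error
print(surrogate_train_error, true_train_error)
```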

4 Overfitting and Hyperparameters
4.1 Overfitting
Complication 3: We get ‘crazy’ values for θ̂ and/or we get really bad test performance.
One reason this could happen is model overfitting.
Solution: Add an explicit regularizer during training, that is,
θ̂ = argmin_θ (1/n) Σ_{i=1}^{n} ℓtrain (yi , fθ (xi )) + R(θ)        (3)

where R(θ) = λ‖θ‖² is an example of a regularizer we could choose and is known as ridge
regularization.
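A minimal sketch of the ridge-regularized objective in eq. (3), assuming a linear model with squared-error loss (for which the minimizer happens to have a closed form; in general one would run an optimizer on eq. (3)); the data and the value of λ are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up data for a linear model with squared-error loss.
n, d = 100, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

lam = 0.1  # regularization strength λ (a hyperparameter; see Section 4.2)

def regularized_loss(theta):
    # (1/n) Σ_i ℓ_train(y_i, f_θ(x_i)) + R(θ), with R(θ) = λ‖θ‖².
    return np.mean((y - X @ theta) ** 2) + lam * np.sum(theta ** 2)

# For this particular model and loss, the minimizer of eq. (3) has a closed form:
# (XᵀX/n + λI) θ = Xᵀy/n. In general we would run an optimizer instead.
theta_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(regularized_loss(theta_hat))
```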
Side note: An alternative solution is to simplify your model or reduce the model order (e.g.
the depth of the model). Simplifying the model is often suggested in statistics; however, in
deep learning it is uncommon to simplify the model when crazy values of θ̂ arise from
training.

4.2 Hyperparameter tuning


By adding a regularization term, we have introduced a new parameter λ, which raises the
question: how do we choose λ?
The naive approach: θ̂ = argmin_{θ,λ} (1/n) Σ_{i=1}^{n} ℓ(yi , fθ (xi )) + λ‖θ‖², but this
doesn’t work well because we can optimize this objective by assigning an absurd value to λ
(e.g. 0 or −∞).
This is where we treat λ as a hyperparameter, where a hyperparameter can be described as a
parameter ‘that, if you let the optimizer deal with it, will make it go crazy’, while θ is
treated as a normal parameter.
Solution: Separate the parameters from the hyperparameters. Then withhold some data
from our training set to create a validation set, which we can use specifically to optimize
hyperparameters. An example of how to partition the data is given in Figure 1. We can
optimize hyperparameters using methods such as gradient descent or by performing a brute-
force grid search, as in the sketch below.
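A minimal sketch of brute-force grid search over λ on a held-out validation set, continuing the made-up ridge-regression setting; the grid of λ values is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up training data (the test set is assumed to be stored elsewhere, untouched).
n, d = 150, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.3 * rng.normal(size=n)

# Carve a validation set out of the training data.
n_val = 50
X_tr, y_tr = X[:-n_val], y[:-n_val]
X_val, y_val = X[-n_val:], y[-n_val:]

def fit_ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X / len(y) + lam * np.eye(d), X.T @ y / len(y))

best_lam, best_val_error = None, np.inf
for lam in [1e-3, 1e-2, 1e-1, 1.0, 10.0]:               # brute-force grid over λ
    theta = fit_ridge(X_tr, y_tr, lam)
    val_error = np.mean((y_val - X_val @ theta) ** 2)   # validation error
    if val_error < best_val_error:
        best_lam, best_val_error = lam, val_error

print(best_lam, best_val_error)
```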
Further Complication: The optimizer could also contain hyperparameters such as the
learning rate or step size η in gradient descent.

5 Optimization Algorithm
There are many different optimization algorithms when it comes to minimizing a loss function.
A commonly used one is gradient descent. Gradient descent is an iterative optimization
algorithm which changes the parameter of interest θ a little bit at a time by looking at the
local neighborhood of the loss around θt . We look at this neighborhood by taking the
first-order Taylor expansion: Ltrain (θt + ∆θ) ≈ Ltrain (θt ) + ∇θ Ltrain |θt · ∆θ.

Figure 1: Data partitions for fitting parameters and hyperparameters

We want to choose ∆θ so that the loss Ltrain (θt + ∆θ) decreases as much as possible; to first
order, this means moving in the negative direction of the gradient. So gradient descent
updates θ by the following:

θt+1 = θt + η (−∇θ Ltrain (θt ))        (4)

where η is the learning rate, and an example is Ltrain (θ) = (1/n) Σ_{i=1}^{n} ℓ(yi , fθ (xi )) + R(θ).
So we see that the update takes small steps in the opposite direction of the gradient
to reach a minimum.
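A minimal sketch of the update in eq. (4), assuming the same made-up ridge-regularized squared-error objective as above; the learning rate η and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up ridge-regularized least-squares objective to minimize.
n, d = 100, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 0.1  # λ, which in practice would be chosen on a validation set

def loss(theta):
    # L_train(θ) = (1/n) Σ_i ℓ(y_i, f_θ(x_i)) + R(θ), squared error plus ridge.
    return np.mean((y - X @ theta) ** 2) + lam * np.sum(theta ** 2)

def grad(theta):
    # Analytic gradient of the objective above.
    return -2.0 / n * X.T @ (y - X @ theta) + 2.0 * lam * theta

eta = 0.05            # learning rate η: too large diverges, too small is slow
theta = np.zeros(d)
for t in range(500):
    theta = theta + eta * (-grad(theta))   # eq. (4): θ_{t+1} = θ_t + η(−∇θ L_train)

print(loss(theta))
```

With these made-up numbers, raising η well past the stability threshold set by the curvature of the loss (e.g. η = 1.0) typically makes the iterates blow up rather than converge, which is the stability issue discussed next.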

Figure 2: Example of the iterative gradient descent update

We can interpret Eq. 4 as a dynamical system in discrete time, which means we can ask
questions about the stability of the system. In our case, η controls the stability of our
system: if η is too large, the dynamics become unstable (we could possibly diverge), but
if η is too small, practically speaking, it could take too long to reach a minimum.

Time Consideration: There are different interpretations of time when training models.
On one hand, we could look at how many training iterations have passed or how much data
we have ingested (e.g. epochs). Another important consideration is wall-clock time,
which is simply the amount of real-world time used in training. When training our
model, both interpretations of time matter, since we want to train our model for enough
iterations while not using too much wall-clock time (depending on our budget).
