
CIS 520: Machine Learning Spring 2021: Lecture 5

Least Squares Regression

Lecturer: Shivani Agarwal

Disclaimer: These notes are designed to be a supplement to the lecture. They may or may not cover all the material discussed in the lecture (and vice versa).

Outline
• Regression and conditional expectation
• Linear least squares regression
• Ridge regression and Lasso
• Probabilistic view

1 Regression and Conditional Expectation

In this lecture we consider regression problems, where there is an instance space $\mathcal{X}$ as before, but labels and predictions are real-valued: $\mathcal{Y} = \widehat{\mathcal{Y}} = \mathbb{R}$ (such as in a weather forecasting problem, where instances might be satellite images showing water vapor in some region and labels/predictions might be the amount of rainfall in the coming week, or in a stock price prediction problem, where instances might be feature vectors describing properties of stocks and labels/predictions might be the stock price after some time period). Here one is given a training sample $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X} \times \mathbb{R})^m$, and the goal is to learn from S a regression model $f_S : \mathcal{X} \to \mathbb{R}$ that accurately predicts the labels of new instances in $\mathcal{X}$.
What should count as a good regression model? In other words, how should we measure the performance of a regression model? A widely used performance measure involves the squared loss function, $\ell_{\rm sq} : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$, defined as
\[
\ell_{\rm sq}(y, \hat{y}) = (\hat{y} - y)^2 .
\]
The loss of a model $f : \mathcal{X} \to \mathbb{R}$ on an example (x, y) is measured by $\ell_{\rm sq}(y, f(x)) = (f(x) - y)^2$. Assuming examples are drawn from some joint probability distribution D on $\mathcal{X} \times \mathbb{R}$, the squared-loss generalization error of $f : \mathcal{X} \to \mathbb{R}$ w.r.t. D is then given by
\[
\mathrm{er}^{\rm sq}_D[f] = \mathbf{E}_{(X,Y) \sim D}\big[ (f(X) - Y)^2 \big] .
\]

What would be the optimal regression model for D under the above loss? We have
\[
\mathrm{er}^{\rm sq}_D[f] = \mathbf{E}_X\Big[ \mathbf{E}_{Y|X}\big[ (f(X) - Y)^2 \big] \Big] .
\]
Now, for each x, we know (and it is easy to see) that the value c minimizing $\mathbf{E}_{Y|X=x}\big[(c - Y)^2\big]$ is given by $c^* = \mathbf{E}[Y \,|\, X = x]$. Therefore the optimal regression model is simply the conditional expectation function, also called the regression function of Y on x:
\[
f^*(x) = \mathbf{E}[Y \,|\, X = x] .
\]
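One way to see this: for each fixed x, expanding the conditional expected loss around the conditional mean gives
\[
\mathbf{E}_{Y|X=x}\big[ (c - Y)^2 \big] = \big( c - \mathbf{E}[Y \,|\, X = x] \big)^2 + \mathrm{Var}[Y \,|\, X = x] ,
\]
which is minimized over c at $c^* = \mathbf{E}[Y \,|\, X = x]$, with minimum value $\mathrm{Var}[Y \,|\, X = x]$.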


The conditional expectation function plays the same role for regression w.r.t. squared loss as does a Bayes optimal classifier for binary classification w.r.t. 0-1 loss. The minimum achievable squared error w.r.t. D is simply
\[
\mathrm{er}^{\rm sq,*}_D = \inf_{f : \mathcal{X} \to \mathbb{R}} \mathrm{er}^{\rm sq}_D[f] = \mathrm{er}^{\rm sq}_D[f^*] = \mathbf{E}_X\Big[ \mathbf{E}_{Y|X}\big[ (Y - \mathbf{E}[Y|X])^2 \big] \Big] = \mathbf{E}_X\big[ \mathrm{Var}[Y|X] \big] ,
\]
which is the expectation over X of the conditional variance of Y given X; this plays the same role as the Bayes error for 0-1 binary classification.

2 Linear Least Squares Regression

For the remainder of the lecture, let $\mathcal{X} = \mathbb{R}^d$, and let $S = ((x_1, y_1), \ldots, (x_m, y_m)) \in (\mathcal{X} \times \mathbb{R})^m$. We start with a simple approach which does not make any assumptions about the underlying probability distribution, but simply fits a linear regression model of the form $f_w(x) = w^\top x$ to the data by minimizing the empirical squared error on S, $\widehat{\mathrm{er}}^{\rm sq}_S[f_w] = \frac{1}{m} \sum_{i=1}^m (f_w(x_i) - y_i)^2$:
\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)^2 . \tag{1}
\]

Setting the gradient of the above objective to zero yields
\[
\frac{2}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)\, x_i = 0 .
\]

We can rewrite this using matrix notation as follows: let
\[
X = \begin{pmatrix} x_1^\top \\ x_2^\top \\ \vdots \\ x_m^\top \end{pmatrix} \in \mathbb{R}^{m \times d}
\qquad \text{and} \qquad
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} \in \mathbb{R}^m ;
\]
then, using $\sum_{i=1}^m x_i x_i^\top = X^\top X$ and $\sum_{i=1}^m y_i x_i = X^\top y$, we have
\[
X^\top X w - X^\top y = 0 .
\]
These are known as the normal equations for least squares regression and yield the following solution for w (assuming $X^\top X$ is non-singular):
\[
\hat{w} = (X^\top X)^{-1} X^\top y .
\]
The linear least squares regression model is then given by
\[
f_S(x) = \hat{w}^\top x .
\]

The solution $\hat{w}$ can be viewed as performing an orthogonal projection of the label vector $y \in \mathbb{R}^m$ onto the d-dimensional subspace (assuming m > d) spanned by the d vectors $\tilde{x}_k = (x_{1k}, \ldots, x_{mk})^\top \in \mathbb{R}^m$, $k = 1, \ldots, d$ (in particular, the vector $\hat{y} = X\hat{w}$ constitutes the projection of y onto this subspace). We will see below that the same regression model also arises as a maximum likelihood solution under suitable probabilistic assumptions. Before doing so, we discuss two variants of the above model that are widely used in practice.
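As a concrete illustration (not part of the original notes), here is a minimal NumPy sketch of the closed-form least squares solution above; the data matrix X and label vector y are assumed to follow the conventions just described, and in practice a numerically stabler routine such as np.linalg.lstsq would typically be preferred to forming $(X^\top X)^{-1}$ explicitly.

```python
import numpy as np

def least_squares_fit(X, y):
    """Ordinary least squares: solve the normal equations X^T X w = X^T y.

    Assumes X^T X is non-singular (e.g., X has full column rank).
    """
    return np.linalg.solve(X.T @ X, X.T @ y)

def predict(w_hat, X):
    """Linear model f_S(x) = w_hat^T x, applied to each row of X."""
    return X @ w_hat

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
m, d = 100, 3
X = rng.normal(size=(m, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=m)
w_hat = least_squares_fit(X, y)   # should be close to w_true
```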

3 Ridge Regression and Lasso

We saw above that the simple least squares regression model requires $X^\top X$ to be non-singular; indeed, when $X^\top X$ is close to being singular (which is the case if two or more columns of X are nearly co-linear), then $\hat{w}$ can contain large values that lead to over-fitting the training data. To prevent this, one often adds a penalty term or a regularizer to the objective in Eq. (1) that penalizes large values in w (such methods are also referred to as parameter shrinkage methods in statistics).
A widely used regularizer is the $L_2$ regularizer $\|w\|_2^2 = \sum_{j=1}^d w_j^2$, leading to the following:
\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)^2 + \lambda \|w\|_2^2 , \tag{2}
\]
where λ > 0 is a suitable regularization parameter that determines the trade-off between the two terms. Setting the gradient of the above objective to zero again yields a closed-form solution for w:
\[
\hat{w} = \big( X^\top X + \lambda m I_d \big)^{-1} X^\top y ,
\]

where $I_d$ denotes the d × d identity matrix; note that the matrix $X^\top X + \lambda m I_d$ is non-singular. The resulting regression model,
\[
f_S(x) = \hat{w}^\top x ,
\]
is known as ridge regression and is widely used in practice.¹

¹The same regularizer is also widely used in logistic regression, leading to $L_2$-regularized logistic regression.
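A corresponding sketch of the ridge closed form (again illustrative, not from the notes); the factor λm matches the (1/m)-scaled objective in Eq. (2).

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: w_hat = (X^T X + lam*m*I_d)^{-1} X^T y."""
    m, d = X.shape
    return np.linalg.solve(X.T @ X + lam * m * np.eye(d), X.T @ y)
```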
Another regularizer that is frequently used is the $L_1$ regularizer $\|w\|_1 = \sum_{j=1}^d |w_j|$, which leads to
\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)^2 + \lambda \|w\|_1 , \tag{3}
\]

where λ > 0 is again a suitable regularization parameter. This can be formulated as a quadratic programming problem which can be solved using numerical optimization methods. For large enough λ, the solution $\hat{w}$ turns out to be sparse, in the sense that many of the parameter values in $\hat{w}$ are equal to zero, so that the resulting regression model depends on only a small number of features. The $L_1$-regularized least squares regression model is known as lasso and is also widely used, especially in high-dimensional problems where d is large and dependence on a small number of features is desirable.
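The lasso objective in Eq. (3) has no closed form, but it can be handed to standard solvers. As one possible (illustrative) route, scikit-learn's Lasso minimizes $\frac{1}{2m}\|y - Xw\|_2^2 + \alpha\|w\|_1$, so setting α = λ/2 recovers Eq. (3) up to an overall factor of 1/2; this scaling is worth double-checking against the library documentation for the version in use.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_fit(X, y, lam):
    """L1-regularized least squares (Eq. (3)) via scikit-learn's Lasso.

    sklearn minimizes (1/(2m))*||y - Xw||^2 + alpha*||w||_1, which equals
    one half of the Eq. (3) objective when alpha = lam / 2.
    """
    model = Lasso(alpha=lam / 2.0, fit_intercept=False)
    model.fit(X, y)
    return model.coef_   # many coefficients are exactly zero for large lam
```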
For both $L_1$ and $L_2$ regularizers, the regularization parameter λ determines the extent of the penalty for large values in the parameter vector w. In practice, one generally selects λ heuristically from some finite range using a validation set (which involves holding out part of the training data for validation, training on the remaining data with different values of λ, and selecting the one that gives highest performance on the validation data) or cross-validation (which involves dividing the training sample into some k sub-samples/folds, holding out one of these folds at a time and training on the remaining k − 1 folds with different values of λ, testing performance on the held-out fold, and repeating this procedure for all k folds; the value of λ that gives the highest average performance over the k folds is then selected²). In recent years, algorithms for certain models (including lasso) have been developed that can efficiently compute the entire path of solutions for all values of λ. Below we will also see a Bayesian interpretation of these regularizers; this gives another approach to selecting λ.

²An extreme case of cross-validation with k = m leads to what is called leave-one-out validation.
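A minimal sketch of k-fold cross-validation for choosing λ, here for ridge regression with the closed form from above (illustrative only; the helper names are made up for this example).

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X^T X + lam*m*I_d)^{-1} X^T y."""
    m, d = X.shape
    return np.linalg.solve(X.T @ X + lam * m * np.eye(d), X.T @ y)

def cv_select_lambda(X, y, lambdas, k=5, seed=0):
    """Select lambda by k-fold cross-validation on held-out mean squared error."""
    m = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(m), k)
    avg_mse = []
    for lam in lambdas:
        fold_mse = []
        for j in range(k):
            val = folds[j]
            train = np.concatenate([folds[i] for i in range(k) if i != j])
            w = ridge_fit(X[train], y[train], lam)
            fold_mse.append(np.mean((X[val] @ w - y[val]) ** 2))
        avg_mse.append(np.mean(fold_mse))
    return lambdas[int(np.argmin(avg_mse))]   # lambda with best average performance
```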

4 Probabilistic View

We will now make a specific assumption on the conditional distribution of Y given x, and will see that estimating the parameters of that distribution from the training sample using maximum likelihood estimation and using the conditional expectation associated with the estimated distribution as our regression model will recover the linear least squares regression model described above. We will also see that under the same probabilistic assumption, maximum a posteriori (MAP) estimation of the parameters under suitable priors will yield ridge regression and lasso.
Specifically, assume that given $x \in \mathcal{X}$, a label Y is generated randomly as follows:
\[
Y = w^\top x + \epsilon ,
\]
where $w \in \mathbb{R}^d$ and $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is some normally distributed noise with variance σ² > 0. In other words, we have
\[
Y \,|\, X = x \sim \mathcal{N}(w^\top x, \sigma^2) ,
\]
so that the conditional density of Y given x can be written as
\[
p(y \,|\, x; w, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big( -\frac{(y - w^\top x)^2}{2\sigma^2} \Big) .
\]
Clearly, in this case, the optimal regression model (under squared error) is given by
\[
f^*(x) = \mathbf{E}[Y \,|\, X = x] = w^\top x .
\]

In practice, the parameters w, σ are unknown and must be estimated from the training sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$, which is assumed to contain examples drawn i.i.d. from the same distribution. Let us first proceed with maximum likelihood estimation.

Maximum likelihood estimation. We can write the conditional likelihood of w, σ as
\[
L(w, \sigma) = p\big( y_1, \ldots, y_m \,|\, x_1, \ldots, x_m ; w, \sigma \big) = \prod_{i=1}^m p(y_i \,|\, x_i ; w, \sigma) = \prod_{i=1}^m \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big( -\frac{(y_i - w^\top x_i)^2}{2\sigma^2} \Big) .
\]

The log-likelihood becomes
\[
\ln L(w, \sigma) = -\frac{m}{2} \ln(2\pi) - m \ln \sigma - \sum_{i=1}^m \frac{(y_i - w^\top x_i)^2}{2\sigma^2} .
\]
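For concreteness, this log-likelihood is easy to write down in code (a hypothetical helper, not from the notes), which makes it straightforward to check numerically that maximizing over w amounts to minimizing the empirical squared error.

```python
import numpy as np

def log_likelihood(w, sigma, X, y):
    """ln L(w, sigma) under the Gaussian noise model Y = w^T x + eps."""
    m = X.shape[0]
    residuals = y - X @ w
    return (-0.5 * m * np.log(2 * np.pi)
            - m * np.log(sigma)
            - np.sum(residuals ** 2) / (2 * sigma ** 2))
```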

Clearly, maximizing the above log-likelihood w.r.t. w is equivalent to simply minimizing the empirical squared error on S, yielding the same solution as above:
\[
\hat{w} = (X^\top X)^{-1} X^\top y .
\]
This yields the same linear least squares regression model as above:
\[
f_S(x) = \hat{w}^\top x .
\]

The variance parameter σ does not play a role in the regression model, but can be useful in determining the uncertainty in the model's prediction at any point. It can be estimated by maximizing the log-likelihood above w.r.t. σ, which gives
\[
\hat{\sigma}^2 = \frac{1}{m} \sum_{i=1}^m \big( y_i - \hat{w}^\top x_i \big)^2 .
\]
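Putting the two maximum likelihood estimates together, a short illustrative sketch on synthetic data (parameter values made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, sigma_true = 500, 3, 0.3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(m, d))
y = X @ w_true + sigma_true * rng.normal(size=m)

# Maximum likelihood estimates under the Gaussian noise model:
w_hat = np.linalg.solve(X.T @ X, X.T @ y)      # same as the least squares solution
sigma2_hat = np.mean((y - X @ w_hat) ** 2)     # MLE of the noise variance
# w_hat should be close to w_true, and sigma2_hat close to sigma_true**2.
```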

Maximum a posteriori (MAP) estimation. Continuing with the normal (Gaussian) noise model above, we can estimate w using maximum a posteriori (MAP) estimation under a suitable prior rather than using maximum likelihood estimation. For example, let us assume a zero-mean, isotropic normal prior on w. In other words, denoting by W the random variable corresponding to w, we have
\[
W \sim \mathcal{N}\big( 0, \sigma_0^2 I_d \big) ,
\]
where $I_d$ denotes the d × d identity matrix; this is equivalent to assuming that the prior selects each component of W independently from a $\mathcal{N}(0, \sigma_0^2)$ distribution. The prior density can be written as
\[
p(w) = \frac{1}{(2\pi)^{d/2} \sigma_0^d} \exp\Big( -\frac{1}{2\sigma_0^2} \|w\|_2^2 \Big) .
\]

Assuming for simplicity that the noise variance parameter σ is known, the posterior density of w given the data S then takes the form
\[
p(w \,|\, S) \,\propto\, \exp\Big( -\frac{1}{2\sigma_0^2} \|w\|_2^2 \Big) \cdot \prod_{i=1}^m \exp\Big( -\frac{(y_i - w^\top x_i)^2}{2\sigma^2} \Big) ,
\]
giving
\[
\ln p(w \,|\, S) = -\frac{1}{2\sigma_0^2} \|w\|_2^2 - \frac{1}{2\sigma^2} \sum_{i=1}^m \big( y_i - w^\top x_i \big)^2 + \text{const} .
\]
The MAP estimate of w is obtained by maximizing this w.r.t. w; clearly, this is equivalent to solving the following $L_2$-regularized least squares regression problem:
\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)^2 + \frac{\sigma^2}{m \sigma_0^2} \|w\|_2^2 .
\]
This provides an alternative view of ridge regression, and suggests that where it is suitable to assume the above conditional distribution with noise variance σ² and an isotropic normal prior on w with variance σ₀², an appropriate choice for the regularization parameter is given by $\lambda = \frac{\sigma^2}{m \sigma_0^2}$.
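This correspondence is easy to sanity-check numerically: at the ridge solution with λ = σ²/(mσ₀²), the gradient of ln p(w|S) should vanish. A minimal sketch (σ and σ₀ values made up for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
m, d = 200, 4
sigma, sigma0 = 0.5, 1.0             # assumed noise and prior standard deviations
X = rng.normal(size=(m, d))
y = X @ rng.normal(size=d) + sigma * rng.normal(size=m)

lam = sigma**2 / (m * sigma0**2)     # regularization parameter implied by the MAP view
w_map = np.linalg.solve(X.T @ X + lam * m * np.eye(d), X.T @ y)

# Gradient of ln p(w|S) at w_map; should be numerically zero.
grad = -w_map / sigma0**2 + X.T @ (y - X @ w_map) / sigma**2
print(np.max(np.abs(grad)))          # close to zero up to floating point error
```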

If instead of an isotropic normal prior we assume an isotropic Laplace prior with density
\[
p(w) = \Big( \frac{\lambda_0}{2} \Big)^d \exp\big( -\lambda_0 \|w\|_1 \big) ,
\]
then the posterior density becomes
\[
p(w \,|\, S) \,\propto\, \exp\big( -\lambda_0 \|w\|_1 \big) \cdot \prod_{i=1}^m \exp\Big( -\frac{(y_i - w^\top x_i)^2}{2\sigma^2} \Big) ,
\]
with
\[
\ln p(w \,|\, S) = -\lambda_0 \|w\|_1 - \frac{1}{2\sigma^2} \sum_{i=1}^m \big( y_i - w^\top x_i \big)^2 + \text{const} .
\]
In this case, finding the MAP estimate of w is equivalent to solving the following $L_1$-regularized least squares regression problem:
\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{m} \sum_{i=1}^m \big( w^\top x_i - y_i \big)^2 + \frac{2\sigma^2 \lambda_0}{m} \|w\|_1 .
\]
Again, this provides an alternative view of lasso, and suggests that where it is suitable to assume the above conditional distribution with noise variance σ² and an isotropic Laplace prior on w with parameter λ₀, an appropriate choice for the regularization parameter is given by $\lambda = \frac{2\sigma^2 \lambda_0}{m}$.
Exercise. Show that for any $f : \mathcal{X} \to \mathbb{R}$, the squared-error regret of f, i.e. the difference of its squared error from the optimal, is equal to the expected squared difference between f(X) and $\mathbf{E}[Y|X]$:
\[
\mathrm{er}^{\rm sq}_D[f] - \mathrm{er}^{\rm sq,*}_D = \mathbf{E}_X\Big[ \big( f(X) - \mathbf{E}[Y|X] \big)^2 \Big] .
\]
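(One possible route, not spelled out in the notes: condition on X and apply the same expansion used in Section 1 with c = f(X), giving $\mathbf{E}_{Y|X}\big[(f(X) - Y)^2\big] = \big(f(X) - \mathbf{E}[Y|X]\big)^2 + \mathrm{Var}[Y|X]$, and then take expectations over X.)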
