
Applied Econometrics

William Greene
Department of Economics
Stern School of Business

3. Linear Least Squares


Vocabulary
• Some terms to be used in the discussion.
• Population characteristics and entities vs. sample quantities and analogs
• Residuals and disturbances
• Population regression line and sample regression line
• Objective: Learn about the conditional mean function. Estimate β and σ².
• First step: Mechanics of fitting a line (hyperplane) to a set of data
Fitting Criteria
• The set of points in the sample
• Fitting criteria - what are they?
   • LAD (least absolute deviations)
   • Least squares
   • and so on
• Why least squares? (We do not call it 'ordinary' at this point.)
• A fundamental result: sample moments are "good" estimators of their population counterparts.
We will spend the next few weeks using this principle and applying it to least squares computation.
An Analogy Principle
In the population, E[y | X] = Xβ, so
   E[y - Xβ | X] = 0
Continuing:           E[xi εi] = 0
Summing:              Σi E[xi εi] = Σi 0 = 0
Exchange Σi and E:    E[Σi xi εi] = E[X'ε] = 0
                      E[X'(y - Xβ)] = 0

Choose b, the estimator of β, to mimic this population result: i.e.,
mimic the population mean with the sample mean.
Find b such that (1/n) X'e = (1/n) X'(y - Xb) = 0.
As we will see, the solution is the least squares coefficient vector.
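
A quick numerical check (not part of the original slides): the sketch below simulates a data set, computes the least squares b, and verifies that the sample moment (1/n)X'e is zero. The design and variable names are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant plus two regressors
beta = np.array([1.0, 0.5, -2.0])
y = X @ beta + rng.normal(size=n)                            # simulated population model

b = np.linalg.solve(X.T @ X, X.T @ y)   # least squares coefficient vector
e = y - X @ b                           # residuals
print((X.T @ e) / n)                    # sample analog of E[x_i eps_i]; numerically zero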
Population and Sample Moments
We showed that E[εi | xi] = 0 and Cov[xi, εi] = 0.
If so, and if E[y | X] = Xβ, then
   β = (Var[xi])-1 Cov[xi, yi].
This provides a population analog to the statistics we compute with the data.
(Data note: an updated version, 1950-2004, is used in the problem sets.)
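
A minimal sketch of the analogy β = (Var[xi])-1 Cov[xi, yi] in the one-regressor case, again with simulated data: the sample ratio Cov(x, y)/Var(x) reproduces the least squares slope. The particular numbers are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.7 * x + rng.normal(size=n)   # true slope 0.7

# Moment-based estimate: slope = Cov(x, y) / Var(x)
slope_moments = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Least squares estimate from the normal equations, with a constant term
X = np.column_stack([np.ones(n), x])
slope_ls = np.linalg.solve(X.T @ X, X.T @ y)[1]

print(slope_moments, slope_ls)           # the two estimates coincide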
Least Squares
• Example: regress yi = Gi on xi = [a constant, PGi, and Yi] = [1, PGi, Yi]
• Fitting criterion: the fitted equation will be
   ŷi = b1xi1 + b2xi2 + ... + bKxiK.
• The criterion is based on the residuals:
   ei = yi - b1xi1 - b2xi2 - ... - bKxiK
  Make the ei as small as possible. Form a criterion and minimize it.
Fitting Criteria
• Sum of residuals:                        Σi=1..n ei
• Sum of squares:                          Σi=1..n ei²
• Sum of absolute values of residuals:     Σi=1..n |ei|
• Absolute value of the sum of residuals:  |Σi=1..n ei|
• We focus on Σi=1..n ei² now and Σi=1..n |ei| later.
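
To make the distinctions concrete, the sketch below (simulated data, arbitrary candidate coefficients, both assumptions for illustration) evaluates the four criteria for one candidate b.

import numpy as np

rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

b = np.array([0.8, 0.4])     # an arbitrary candidate coefficient vector
e = y - X @ b                # residuals under that candidate

print("sum of residuals:          ", e.sum())
print("sum of squares:            ", (e ** 2).sum())
print("sum of absolute residuals: ", np.abs(e).sum())
print("absolute value of the sum: ", abs(e.sum()))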
Least Squares Algebra
   Σi=1..n ei² = e'e = (y - Xb)'(y - Xb)
A digression on multivariate calculus:
• Matrix and vector derivatives
• Derivative of a scalar with respect to a vector
• Derivative of a column vector with respect to a row vector
• Other derivatives
Least Squares Normal Equations
   ∂(y - Xb)'(y - Xb)/∂b = -2X'(y - Xb) = 0
Dimensions: ∂(1x1)/∂(Kx1): (-2)(nxK)'(nx1) = (-2)(Kxn)(nx1) = Kx1
Note: the derivative of a 1x1 with respect to a Kx1 is a Kx1 vector.

Solution: X'y = X'Xb
Least Squares Solution
Assuming it exists: b = (X'X)-1 X'y

Note the analogy:  β = [Var(x)]-1 Cov(x, y)
                   b = [(1/n) X'X]-1 [(1/n) X'y]
This suggests something desirable about least squares.
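
A small check, with simulated data, that the normal-equation solution, the (1/n)-scaled moment form, and a library least squares routine all return the same coefficient vector. The call to numpy.linalg.lstsq is one convenient choice for comparison, not the slides' own computation.

import numpy as np

rng = np.random.default_rng(3)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, -0.3, 2.0]) + rng.normal(size=n)

b_normal = np.linalg.solve(X.T @ X, X.T @ y)               # solves X'Xb = X'y
b_moment = np.linalg.solve((X.T @ X) / n, (X.T @ y) / n)   # [(1/n)X'X]^{-1} [(1/n)X'y]
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)            # library least squares routine

print(np.allclose(b_normal, b_moment), np.allclose(b_normal, b_lstsq))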
Second Order Conditions
   ∂(y - Xb)'(y - Xb)/∂b = -2X'(y - Xb)
   ∂²(y - Xb)'(y - Xb)/∂b∂b' = ∂[-2X'(y - Xb)]/∂b'
      = ∂(column vector)/∂(row vector)
      = 2X'X
Does b Minimize e'e?
   ∂²e'e/∂b∂b' = 2X'X
      = 2 [ Σi xi1²      Σi xi1 xi2   ...   Σi xi1 xiK ]
          [ Σi xi2 xi1   Σi xi2²      ...   Σi xi2 xiK ]
          [ ...          ...          ...   ...        ]
          [ Σi xiK xi1   Σi xiK xi2   ...   Σi xiK²    ]
If there were a single b, we would require this to be positive,
which it would be: 2x'x = 2 Σi=1..n xi² > 0.
The matrix counterpart of a positive number is a positive definite matrix.
Sample Moments - Algebra
   X'X = [ Σi xi1²      Σi xi1 xi2   ...   Σi xi1 xiK ]
         [ Σi xi2 xi1   Σi xi2²      ...   Σi xi2 xiK ]
         [ ...          ...          ...   ...        ]
         [ Σi xiK xi1   Σi xiK xi2   ...   Σi xiK²    ]
       = Σi=1..n xi xi'
where xi is the ith observation on the K regressors, written as a Kx1 column vector.
That is, X'X is a sum of n outer products, one per observation.
Positive Definite Matrix
Matrix C is positive definite if a'Ca > 0 for any a ≠ 0.
Generally hard to check. Requires a look at characteristic roots (later in the course).
For some matrices, it is easy to verify. X'X is one of these:
   a'X'Xa = (a'X')(Xa) = (Xa)'(Xa) = v'v = Σi=1..n vi² ≥ 0.
Could v = 0? Only if Xa = 0 for some a ≠ 0, i.e., if the columns of X are linearly dependent.
Conclusion: b = (X'X)-1 X'y does indeed minimize e'e.
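
The sketch below, with a simulated full-column-rank X (an assumption for illustration), checks that a'X'Xa equals the squared length of v = Xa and is strictly positive, and that all characteristic roots of X'X are positive.

import numpy as np

rng = np.random.default_rng(5)
n, K = 30, 4
X = rng.normal(size=(n, K))        # full column rank with probability one

A = X.T @ X
for _ in range(5):
    a = rng.normal(size=K)
    v = X @ a
    # a'X'Xa equals v'v, the squared length of v = Xa, hence strictly positive here
    print(np.isclose(a @ A @ a, v @ v), a @ A @ a > 0)

print(np.linalg.eigvalsh(A).min() > 0)   # all characteristic roots are positive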
Algebraic Results - 1
In the population:  E[X'ε] = 0
In the sample:      (1/n) Σi=1..n xi ei = 0
Residuals vs. Disturbances
Disturbances (population):  yi = xi'β + εi
   Partitioning y:  y = E[y|X] + ε
                      = conditional mean + disturbance
Residuals (sample):         yi = xi'b + ei
   Partitioning y:  y = Xb + e
                      = projection + residual
   (Note: projection 'into' the column space of X)
Algebraic Results - 2
• The "residual maker": M = I - X(X'X)-1X'
• e = y - Xb = y - X(X'X)-1X'y = My
• MX = 0 (This result is fundamental!)
  How do we interpret this result in terms of residuals?
• (Therefore) My = MXb + Me = Me = e
  (You should be able to prove this.)
• y = Py + My, where P = X(X'X)-1X' = I - M.
  PM = MP = 0. (Projection matrix)
• Py is the projection of y into the column space of X. (New term?)
The M Matrix
• M = I - X(X'X)-1X' is an nxn matrix
• M is symmetric: M = M'
• M is idempotent: MM = M (just multiply it out)
• M is singular: M-1 does not exist.
  (We will prove this later as a side result in another derivation.)
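
An illustrative check of these properties on a small simulated data set. Forming M explicitly is harmless here, though (as a later slide notes) M is nxn and potentially huge in real applications.

import numpy as np

rng = np.random.default_rng(6)
n, K = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)    # projection matrix X(X'X)^{-1}X'
M = np.eye(n) - P                        # residual maker

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

print(np.allclose(M @ X, 0))             # MX = 0
print(np.allclose(M, M.T))               # symmetric
print(np.allclose(M @ M, M))             # idempotent
print(np.linalg.matrix_rank(M), n - K)   # rank n - K, so M is singular
print(np.allclose(M @ y, e))             # My = e
print(np.allclose(P @ y + M @ y, y))     # y = Py + My
print(np.allclose(P @ M, 0))             # PM = 0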
Results when X Contains a Constant Term
• X = [1, x2, ..., xK]
• The first column of X is a column of ones
• Since X'e = 0, x1'e = 0: the residuals sum to zero.
   y = Xb + e
   Define i = [1, 1, ..., 1]', a column of n ones.
   i'y = Σi=1..n yi = nȳ
   i'y = i'Xb + i'e = i'Xb
   implies (after dividing by n)
   ȳ = x̄'b  (the regression line passes through the means)
These results do not apply if the model has no constant term.
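
A sketch with simulated data (the data-generating values are assumptions for illustration): with a constant term the residuals sum to zero and the fitted line passes through the means; with the constant dropped, the residuals no longer sum to zero.

import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=(n, 2))
y = 3.0 + x @ np.array([1.0, -0.5]) + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])             # model with a constant term
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

print(np.isclose(e.sum(), 0))                    # residuals sum to zero
print(np.isclose(y.mean(), X.mean(axis=0) @ b))  # ybar = xbar'b: passes through the means

Xn = x                                           # same model, constant omitted
bn = np.linalg.solve(Xn.T @ Xn, Xn.T @ y)
en = y - Xn @ bn
print(np.isclose(en.sum(), 0))                   # typically False without a constant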
Least Squares Algebra
Least Squares Residuals
Least Squares Algebra-3
   M is nxn: potentially huge.
Least Squares Algebra-4
