Regression Analysis


1 Overview

1.1 Multiple Linear Regression Model

The multiple linear regression model is set up as follows:


$n$ cases: $i = 1, 2, \dots, n$

One response (dependent) variable: $y_i$, $i = 1, 2, \dots, n$

$p$ explanatory (independent) variables: $x_i = (x_{i,1}, x_{i,2}, \dots, x_{i,p})^T$, $i = 1, 2, \dots, n$

An explanatory variable is the expected cause: it explains the results. A
response variable is the expected effect: it responds to the explanatory variables.
The goal of regression analysis is to extract the relationship between the
response variable and the explanatory variables.

1.1.1 General Linear Model and Its Extensions

General Linear Model For each $i$, the conditional distribution $[y_i \mid x_i]$ is
given by:
$$y_i = \hat{y}_i + \epsilon_i$$
where
$$\hat{y}_i = \sum_{j=1}^{p} \beta_j x_{i,j}$$
$\beta = (\beta_1, \beta_2, \dots, \beta_p)^T$ are the $p$ regression parameters, and the distribution of $\epsilon_i$ varies from model to model.
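
As a minimal sketch of this setup (hypothetical data and parameter values, using numpy, with a normal error distribution as one possible choice for $\epsilon_i$), the model can be simulated as follows:

import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                        # n cases, p explanatory variables
x = rng.normal(size=(n, p))          # rows are x_i = (x_{i,1}, ..., x_{i,p})
beta = np.array([1.5, -2.0, 0.5])    # hypothetical regression parameters

y_hat = x @ beta                     # systematic part: y_hat_i = sum_j beta_j * x_{i,j}
eps = rng.normal(scale=1.0, size=n)  # one choice of error: eps_i i.i.d. N(0, 1)
y = y_hat + eps                      # observed responses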

1.1.2 Extensions to a Broader Range of Models

1. Polynomial model: $x_{i,j}$ is replaced by $(x_i)^j$ (see the sketch after this list).
2. Fourier model: $x_{i,j}$ is replaced by $\sin(j x_i)$, $\cos(j x_i)$.
3. Time series regressions: time is indexed by $i$, and the explanatory variables include lagged values of the response variable.
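
As an illustration of the first two extensions, here is a minimal numpy sketch (the function names and the scalar regressor x are hypothetical) that builds the corresponding design matrices:

import numpy as np

def polynomial_design(x, degree):
    # Columns (x_i)^1, ..., (x_i)^degree built from a single regressor x
    return np.column_stack([x**j for j in range(1, degree + 1)])

def fourier_design(x, order):
    # Columns sin(j*x_i), cos(j*x_i) for j = 1, ..., order
    return np.column_stack(
        [f(j * x) for j in range(1, order + 1) for f in (np.sin, np.cos)]
    )

x = np.linspace(0.0, 2.0 * np.pi, 50)    # hypothetical scalar regressor
X_poly = polynomial_design(x, degree=3)  # shape (50, 3)
X_fourier = fourier_design(x, order=3)   # shape (50, 6)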

1.2 Steps for Fitting a Model

(1)Propose a model
1.specify the scale of response variable Y
2.select the appropriate form of independent variable X
3.Assuming the distribution of ϵ
(2) Specify a criterion for judging the parameter
(3) Applying the parameter to the given data
(4) Check the assumptions in (1)
In step(1) we have different form of Residual Distribution
Gauss-Markov: zero mean, constant variance, uncorrelated
Normal-linear models: ϵi are i.i.d N (0, σ 2 )
Generalized Gauss-Markov:zero mean, and general covariance matrix, which
means the convariance matrix do not have zero element.
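
A minimal end-to-end sketch of these steps (hypothetical simulated data, with least squares as the criterion in step (2), computed via numpy):

import numpy as np

rng = np.random.default_rng(1)

# (1) Propose a model: y = x beta + eps with eps_i i.i.d. N(0, sigma^2) (normal-linear)
n, p = 200, 2
x = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0])                  # hypothetical parameters
y = x @ beta_true + rng.normal(scale=0.5, size=n)

# (2)-(3) Criterion: least squares, fitted to the given data
beta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)

# (4) Check the assumptions from (1): residuals should look like zero-mean,
#     constant-variance, uncorrelated noise
resid = y - x @ beta_hat
print(beta_hat, resid.mean(), resid.std())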

1.3 Ordinary Least Squares

We rewrite the general linear model in matrix form:


$$
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad
\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_p \end{pmatrix}, \qquad
x = \begin{pmatrix} x_{1,1} & x_{1,2} & \dots & x_{1,p} \\ x_{2,1} & x_{2,2} & \dots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \dots & x_{n,p} \end{pmatrix}
$$
Ordinary least squares turns the parameter-estimation problem into a convex optimization problem:

$$\min_{\beta} Q(\beta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

$$
\hat{y} = \begin{pmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_n \end{pmatrix} = x\beta, \qquad
Q(\beta) = (y - x\beta)^T (y - x\beta)
$$

$\hat{\beta}$ solves the OLS problem when $\frac{\partial Q(\beta)}{\partial \beta_j} = 0$ for $j = 1, 2, \dots, p$:

$$
\frac{\partial Q(\beta)}{\partial \beta_i} = \frac{\partial}{\partial \beta_i} \sum_{j=1}^{n} \left[ y_j - (\beta_1 x_{j,1} + \beta_2 x_{j,2} + \dots + \beta_p x_{j,p}) \right]^2
$$
$$
= -2 \sum_{j=1}^{n} x_{j,i} \left( y_j - (\beta_1 x_{j,1} + \beta_2 x_{j,2} + \dots + \beta_p x_{j,p}) \right)
$$
$$
= -2 x_{[i]}^T (y - x\beta)
$$
where $x_{[i]}$ denotes the $i$-th column of $x$. Stacking these partial derivatives gives the gradient:
$$
\frac{\partial Q(\beta)}{\partial \beta} = \begin{pmatrix} -2 x_{[1]}^T (y - x\beta) \\ -2 x_{[2]}^T (y - x\beta) \\ \vdots \\ -2 x_{[p]}^T (y - x\beta) \end{pmatrix} = -2 x^T (y - x\beta)
$$
Setting the gradient to zero gives the OLS estimator:
$$
-2 x^T (y - x\hat{\beta}) = 0 \;\Rightarrow\; x^T y - x^T x \hat{\beta} = 0 \;\Rightarrow\; \hat{\beta} = (x^T x)^{-1} x^T y
$$

The matrix $H = x(x^T x)^{-1} x^T$, which maps the observations to the fitted values via $\hat{y} = Hy$, is called the "hat matrix"; it is a special kind of matrix called a projection matrix.
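
As a minimal numpy sketch (hypothetical data), the closed-form estimator and the hat matrix can be computed and the projection property checked directly:

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data for illustration
n, p = 50, 3
x = rng.normal(size=(n, p))
y = x @ np.array([1.0, 0.5, -1.5]) + rng.normal(scale=0.3, size=n)

# OLS estimate from the normal equations: beta_hat = (x^T x)^{-1} x^T y
beta_hat = np.linalg.solve(x.T @ x, x.T @ y)

# Hat matrix H = x (x^T x)^{-1} x^T, so that y_hat = H y
H = x @ np.linalg.solve(x.T @ x, x.T)
y_hat = H @ y

# H is a projection matrix: symmetric and idempotent (H H = H)
print(np.allclose(H, H.T), np.allclose(H @ H, H))
print(np.allclose(y_hat, x @ beta_hat))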
