Regression Analysis and Multiple Regression: Session 7


Regression Analysis and

Multiple Regression
Session 7
Simple Linear Regression Model
• Using Statistics
• The Simple Linear Regression Model
• Estimation: The Method of Least Squares
• Error Variance and the Standard Errors of Regression Estimators
• Correlation
• Hypothesis Tests about the Regression Relationship
• How Good is the Regression?
• Analysis of Variance Table and an F Test of the Regression Model
• Residual Analysis and Checking for Model Inadequacies
• Use of the Regression Model for Prediction
• Using the Computer
• Summary and Review of Terms
7-1 Using Statistics
[Scatterplot of Advertising Expenditures (X) and Sales (Y).]
This scatterplot locates pairs of observations of advertising expenditures on the x-axis and sales on the y-axis. We notice that:
• Larger (smaller) values of sales tend to be associated with larger (smaller) values of advertising.
• The scatter of points tends to be distributed around a positively sloped straight line.
• The pairs of values of advertising expenditures and sales are not located exactly on a straight line. The scatter plot reveals a more or less strong tendency rather than a precise linear relationship. The line represents the nature of the relationship on average.
Examples of Other Scatterplots
[Panels of example scatterplots showing various patterns of association between X and Y.]
Model Building
The inexact nature of the relationship between advertising and sales suggests that a statistical model might be useful in analyzing the relationship. A statistical model separates the systematic component of a relationship from the random component:

Data = Systematic component + Random errors

In ANOVA, the systematic component is the variation of means between samples or treatments (SSTR) and the random component is the unexplained variation (SSE). In regression, the systematic component is the overall linear relationship, and the random component is the variation around the line.
7-2 The Simple Linear
Regression Model
The population simple linear regression model:

Y = β0 + β1X + ε

where β0 + β1X is the nonrandom or systematic component and ε is the random component, and where:
• Y is the dependent variable, the variable we wish to explain or predict;
• X is the independent variable, also called the predictor variable;
• ε is the error term, the only random component in the model, and thus the only source of randomness in Y;
• β0 is the intercept of the systematic component of the regression relationship;
• β1 is the slope of the systematic component.

The conditional mean of Y:

E[Y|X] = β0 + β1X
Picturing the Simple Linear
Regression Model
[Regression plot: the line E[Y] = β0 + β1X, with intercept β0, slope β1, and an observation Yi deviating from E[Yi] by the error εi.]
The simple linear regression model posits an exact linear relationship between the expected or average value of Y, the dependent variable, and X, the independent or predictor variable:

E[Yi] = β0 + β1Xi

Actual observed values of Y differ from the expected value by an unexplained or random error:

Yi = E[Yi] + εi = β0 + β1Xi + εi
Assumptions of the Simple
Linear Regression Model
• The relationship between X and Y is a straight-line relationship.
• The values of the independent variable X are assumed fixed (not random); the only randomness in the values of Y comes from the error term εi.
• The errors εi are normally distributed with mean 0 and variance σ². The errors are uncorrelated (not related) in successive observations. That is: εi ~ N(0, σ²).
[Figure: identical normal distributions of errors, all centered on the regression line E[Y] = β0 + β1X.]
7-3 Estimation: The Method of
Least Squares
Estimation of a simple linear regression relationship involves finding
estimated or predicted values of the intercept and slope of the linear
regression line.
The estimated regression equation:

Y = b0 + b1X + e

where b0 estimates the intercept of the population regression line, β0; b1 estimates the slope of the population regression line, β1; and e stands for the observed errors - the residuals from fitting the estimated regression line b0 + b1X to a set of n points.

The estimated regression line:

Ŷ = b0 + b1X

where Ŷ (Y-hat) is the value of Y lying on the fitted regression line for a given value of X.
Fitting a Regression Line
[Figures: the data points; three errors from a fitted line; three errors from the least squares regression line. Errors from the least squares regression line are minimized.]
Errors in Regression
[Figure: errors in regression.]
Ŷ = b0 + b1X is the fitted regression line.
Ŷi is the predicted value of Y for Xi.
Error: ei = Yi − Ŷi
Least Squares Regression
The sum of squared errors in regression is:

SSE = Σ(i=1 to n) ei² = Σ(i=1 to n) (yi − ŷi)²

The least squares regression line is that which minimizes the SSE with respect to the estimates b0 and b1.

The normal equations (from setting ∂SSE/∂b0 = 0 and ∂SSE/∂b1 = 0):

Σ(i=1 to n) yi = nb0 + b1 Σ(i=1 to n) xi          (least squares b0)
Σ(i=1 to n) xiyi = b0 Σ(i=1 to n) xi + b1 Σ(i=1 to n) xi²    (least squares b1)
Sums of Squares, Cross Products,
and Least Squares Estimators
Sums of Squares and Cross Products:

SSx = Σ(x − x̄)² = Σx² − (Σx)²/n
SSy = Σ(y − ȳ)² = Σy² − (Σy)²/n
SSxy = Σ(x − x̄)(y − ȳ) = Σxy − (Σx)(Σy)/n

Least-squares regression estimators:

b1 = SSxy / SSx
b0 = ȳ − b1x̄
Example 7-1
Miles    Dollars   Miles²       Miles × Dollars
1211     1802      1466521      2182222
1345     2405      1809025      3234725
1422     2005      2022084      2851110
1687     2511      2845969      4236057
1849     2332      3418801      4311868
2026     2305      4104676      4669930
2133     3016      4549689      6433128
2253     3385      5076009      7626405
2400     3090      5760000      7416000
2468     3694      6091024      9116792
2699     3371      7284601      9098329
2806     3998      7873636      11218388
3082     3555      9498724      10956510
3209     4692      10297681     15056628
3466     4244      12013156     14709704
3643     5298      13271449     19300614
3852     4801      14837904     18493452
4033     5147      16265089     20757852
4267     5738      18207288     24484046
4498     6420      20232004     28877160
4533     6059      20548088     27465448
4804     6426      23078416     30870504
5090     6321      25908100     32173890
5233     7026      27384288     36767056
5439     6964      29582720     37877196
------   -------   ----------   ----------
79448    106605    293426944    390185024

SSx = Σx² − (Σx)²/n = 293426944 − (79448)²/25 = 40947557.84
SSxy = Σxy − (Σx)(Σy)/n = 390185024 − (79448)(106605)/25 = 51402852.4

b1 = SSxy / SSx = 51402852.4 / 40947557.84 = 1.255333776 ≈ 1.26
b0 = ȳ − b1x̄ = 106605/25 − (1.255333776)(79448/25) = 274.85
Example 7-1: Using the
Computer
MTB > Regress 'Dollars' 1 'Miles';
SUBC> Constant.

Regression Analysis

The regression equation is
Dollars = 275 + 1.26 Miles

Predictor   Coef      Stdev     t-ratio   p
Constant    274.8     170.3     1.61      0.120
Miles       1.25533   0.04972   25.25     0.000

s = 318.2   R-sq = 96.5%   R-sq(adj) = 96.4%

Analysis of Variance

SOURCE      DF   SS         MS         F        p
Regression  1    64527736   64527736   637.47   0.000
Error       23   2328161    101224
Total       24   66855896

[Plot: Regression of Dollars Charged against Miles, with fitted line Y = 274.850 + 1.25533X, R-Squared = 0.965.]
Example 7-1: Using Computer-
Excel
The results below are the output created by selecting the REGRESSION option from the DATA ANALYSIS toolkit.
SUMMARY OUTPUT

Regression Statistics
Multiple R 0.98243393
R Square 0.965176428
Adjusted R Square 0.963662359
Standard Error 318.1578225
Observations 25

ANOVA
df SS MS F Significance F
Regression 1 64527736.8 64527736.8 637.4721586 2.85084E-18
Residual 23 2328161.201 101224.4
Total 24 66855898

Coefficients Standard Error t Stat P-value Lower 95% Upper 95% Lower 95.0% Upper 95.0%
Intercept 274.8496867 170.3368437 1.61356569 0.120259309 -77.51844165 627.217815 -77.51844165 627.217815
MILES 1.255333776 0.049719712 25.248211 2.85084E-18 1.152480856 1.358186696 1.152480856 1.358186696
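The same fit can be reproduced outside Minitab or Excel. A sketch with Python's statsmodels, assuming the 25 (Miles, Dollars) pairs of Example 7-1 are held in the arrays below (the array names are mine, not part of the example):

import numpy as np
import statsmodels.api as sm

miles = np.array([1211, 1345, 1422, 1687, 1849, 2026, 2133, 2253, 2400, 2468,
                  2699, 2806, 3082, 3209, 3466, 3643, 3852, 4033, 4267, 4498,
                  4533, 4804, 5090, 5233, 5439])
dollars = np.array([1802, 2405, 2005, 2511, 2332, 2305, 3016, 3385, 3090, 3694,
                    3371, 3998, 3555, 4692, 4244, 5298, 4801, 5147, 5738, 6420,
                    6059, 6426, 6321, 7026, 6964])

X = sm.add_constant(miles)        # adds the intercept column
fit = sm.OLS(dollars, X).fit()    # ordinary least squares
print(fit.summary())              # coefficients, std. errors, t-ratios, R-sq, ANOVA

The printed coefficients should match the Minitab and Excel output shown above (intercept about 274.85, slope about 1.2553).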
Example 7-1: Using Computer-
Excel
Residual Analysis. The plot shows the absence of a
relationship between the residuals and the X-values (miles).

[Plot: Residuals vs. Miles - the residuals scatter randomly around zero across the range of miles.]
Total Variance and Error
Variance
[Figures: what you see when looking at the total variation of Y, and what you see when looking along the regression line at the error variance of Y.]
7-4 Error Variance and the
Standard Errors of Regression
Estimators
Degrees of Freedom in Regression:

df = (n − 2)   (n total observations less one degree of freedom for each parameter estimated, b0 and b1)

Square and sum all regression errors to find SSE:

SSE = Σ(Y − Ŷ)² = SSY − (SSXY)²/SSX = SSY − b1 SSXY

Example 7-1:
SSE = SSY − b1 SSXY = 66855898 − (1.255333776)(51402852.4) = 2328161.2

An unbiased estimator of σ², denoted by s²:

MSE = SSE / (n − 2)

MSE = 2328161.2 / 23 = 101224.4
s = √MSE = √101224.4 = 318.158
Standard Errors of Estimates
in Regression
The standard error of b0 (intercept):

s(b0) = s √( Σx² / (n SSX) ),  where s = √MSE

The standard error of b1 (slope):

s(b1) = s / √SSX

Example 7-1:
s(b0) = 318.158 √( 293426944 / ((25)(40947557.84)) ) = 170.338
s(b1) = 318.158 / √40947557.84 = 0.04972
Confidence Intervals for the
Regression Parameters
A (1 − α)100% confidence interval for β0:

b0 ± t(α/2, n−2) s(b0)

A (1 − α)100% confidence interval for β1:

b1 ± t(α/2, n−2) s(b1)

Example 7-1, 95% confidence intervals:

b0 ± t(0.025, 23) s(b0) = 274.85 ± (2.069)(170.338) = 274.85 ± 352.43 = [−77.58, 627.28]
b1 ± t(0.025, 23) s(b1) = 1.25533 ± (2.069)(0.04972) = 1.25533 ± 0.10287 = [1.15246, 1.35820]

[Figure: the least-squares point estimate of the slope, b1 = 1.25533, with the lower 95% bound 1.15246 and the upper 95% bound 1.35820. Since the interval excludes it, 0 is not a possible value of the regression slope at 95% confidence.]
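A sketch of the standard-error and interval calculations in Python; the inputs are the Example 7-1 quantities quoted on the slides, and scipy supplies the t critical value:

import numpy as np
from scipy import stats

n = 25
ss_x = 40947557.84        # SS_X
sum_x_sq = 293426944      # sum of x^2
s = 318.158               # sqrt(MSE)
b0, b1 = 274.85, 1.25533

se_b0 = s * np.sqrt(sum_x_sq / (n * ss_x))   # standard error of the intercept (~170.3)
se_b1 = s / np.sqrt(ss_x)                    # standard error of the slope (~0.0497)

t_crit = stats.t.ppf(0.975, df=n - 2)        # ~2.069 with 23 degrees of freedom
print(b0 - t_crit * se_b0, b0 + t_crit * se_b0)   # 95% CI for the intercept
print(b1 - t_crit * se_b1, b1 + t_crit * se_b1)   # 95% CI for the slope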
7-5 Correlation
The correlation between two random variables, X and Y, is a measure of the degree of linear association between the two variables.
The population correlation, denoted by ρ, can take on any value from −1 to 1:
• ρ = −1 indicates a perfect negative linear relationship
• −1 < ρ < 0 indicates a negative linear relationship
• ρ = 0 indicates no linear relationship
• 0 < ρ < 1 indicates a positive linear relationship
• ρ = 1 indicates a perfect positive linear relationship
The absolute value of ρ indicates the strength or exactness of the relationship.
Illustrations of Correlation
Y =-1 Y =0 Y =1

X X X

Y =-.8 Y =0 Y =.8

X X X
Covariance and Correlation
The covariance of two random variables X and Y:

Cov(X, Y) = E[(X − μX)(Y − μY)]

where μX and μY are the population means of X and Y respectively.

The population correlation coefficient:

ρ = Cov(X, Y) / (σX σY)

The sample correlation coefficient*:

r = SSXY / √(SSX SSY)

Example 7-1:
r = 51402852.4 / √((40947557.84)(66855898)) = 51402852.4 / 52321943.29 = 0.9824

*Note: If ρ < 0, b1 < 0; if ρ = 0, b1 = 0; if ρ > 0, b1 > 0.
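A minimal sketch of the sample correlation computation (numpy's corrcoef gives the same value directly; the toy arrays are illustrative only):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

ss_x = np.sum(x**2) - np.sum(x)**2 / n
ss_y = np.sum(y**2) - np.sum(y)**2 / n
ss_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n

r = ss_xy / np.sqrt(ss_x * ss_y)        # sample correlation coefficient
print(r, np.corrcoef(x, y)[0, 1])       # the two values should agree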


Example 7-2:
Using Computer-Excel
SUMMARY OUTPUT

Regression Statistics
Multiple R          0.992265946
R Square            0.984591707
Adjusted R Square   0.98266567
Standard Error      0.279761372
Observations        10

ANOVA
            df   SS            MS            F             Significance F
Regression  1    40.0098686    40.0098686    511.2009204   1.55085E-08
Residual    8    0.626131402   0.078266425
Total       9    40.636

            Coefficients    Standard Error   t Stat         P-value       Lower 95%      Upper 95%     Lower 95.0%    Upper 95.0%
Intercept   -8.762524695    0.594092798      -14.74942084   4.39075E-07   -10.13250603   -7.39254336   -10.13250603   -7.39254336
US          1.423636087     0.062965575      22.60975277    1.55085E-08   1.278437117    1.568835058   1.278437117    1.568835058

RESIDUAL OUTPUT

Observation   Predicted Y    Residuals
1             2.057109569    0.242890431
2             2.484200395    0.115799605
3             3.05365483     -0.15365483
4             3.480745656    -0.280745656
5             3.765472874    -0.065472874
6             4.050200091    0.049799909
7             4.619654526    0.180345474
8             5.758563396    -0.058563396
9             7.466926701    -0.466926701
10            8.463471962    0.436528038

[Scatterplot of Y against X Variable 1 with the fitted line.]
Example 7-2: Regression Plot
[Regression Plot: Y = -8.76252 + 1.42364X, R-Sq = 0.9846; International plotted against United States.]
Hypothesis Tests for the
Correlation Coefficient
H0: ρ = 0 (no linear relationship)
H1: ρ ≠ 0 (some linear relationship)

Test statistic:

t(n−2) = r / √( (1 − r²) / (n − 2) )

Example 7-1:

t = 0.9824 / √( (1 − 0.9651) / (25 − 2) ) = 0.9824 / 0.0389 = 25.25

t(0.005, 23) = 2.807 < 25.25, so H0 is rejected at the 1% level.
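The same test takes a few lines in Python; r and n are the Example 7-1 values quoted above, and scipy supplies the p-value:

from scipy import stats

r, n = 0.9824, 25
t = r / ((1 - r**2) / (n - 2)) ** 0.5     # test statistic, about 25.2 (the slide rounds to 25.25)
p = 2 * stats.t.sf(abs(t), df=n - 2)      # two-sided p-value, essentially 0
print(t, p)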
Hypothesis Tests about the
Regression Relationship
[Figures: three patterns under which β1 = 0 - constant Y, unsystematic variation, and a nonlinear relationship.]

A hypothesis test for the existence of a linear relationship between X and Y:

H0: β1 = 0
H1: β1 ≠ 0

Test statistic for the existence of a linear relationship between X and Y:

t(n−2) = b1 / s(b1)

where b1 is the least-squares estimate of the regression slope and s(b1) is the standard error of b1. When the null hypothesis is true, the statistic has a t distribution with n − 2 degrees of freedom.
Hypothesis Tests for the
Regression Slope
Example 7-1:
H0: β1 = 0
H1: β1 ≠ 0
t = b1 / s(b1) = 1.25533 / 0.04972 = 25.25
t(0.005, 23) = 2.807 < 25.25
H0 is rejected at the 1% level and we may conclude that there is a relationship between charges and miles traveled.

Example 10-3:
H0: β1 = 1
H1: β1 ≠ 1
t = (b1 − 1) / s(b1) = (1.24 − 1) / 0.21 = 1.14
t(0.05, 58) = 1.671 > 1.14
H0 is not rejected at the 10% level. We may not conclude that the beta coefficient is different from 1.
7-7 How Good is the
Regression?
The coefficient of determination, r², is a descriptive measure of the strength of the regression relationship, a measure of how well the regression line fits the data.

(y − ȳ) = (ŷ − ȳ) + (y − ŷ)
Total Deviation = Explained Deviation (Regression) + Unexplained Deviation (Error)

Σ(y − ȳ)² = Σ(ŷ − ȳ)² + Σ(y − ŷ)²
SST = SSR + SSE

r² = SSR/SST = 1 − SSE/SST   (the percentage of total variation explained by the regression)

[Figure: one observation's total deviation from ȳ split into the explained deviation along the fitted line and the unexplained deviation from the line.]
The Coefficient of
Determination
[Figures: three scatterplots illustrating r² = 0, r² = 0.50, and r² = 0.90, with SST partitioned into SSE and SSR.]

Example 7-1:

r² = SSR/SST = 64527736.8 / 66855898 = 0.96518

[Plot: Dollars against Miles with the fitted regression line.]
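A minimal sketch of the r² calculation from observed and fitted values (the helper name is mine):

import numpy as np

def r_squared(y, y_hat):
    # Coefficient of determination: r^2 = SSR/SST = 1 - SSE/SST
    sst = np.sum((y - y.mean()) ** 2)    # total sum of squares
    sse = np.sum((y - y_hat) ** 2)       # error (residual) sum of squares
    return 1 - sse / sst

# Example 7-1, using the sums of squares straight from the slide:
print(64527736.8 / 66855898)             # SSR/SST = 0.96518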
7-8 Analysis of Variance and an F
Test of the Regression Model
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F Ratio
Regression            SSR              1                    MSR           MSR/MSE
Error                 SSE              n − 2                MSE
Total                 SST              n − 1                MST

Example 7-1:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square    F Ratio   p Value
Regression            64527736.8       1                    64527736.8     637.47    0.000
Error                 2328161.2        23                   101224.4
Total                 66855898.0       24
7-9 Residual Analysis and Checking
for Model Inadequacies
[Residual plots, residuals plotted against x, ŷ, or time and centered on 0:]
• Homoscedasticity: residuals appear completely random. No indication of model inadequacy.
• Heteroscedasticity: the variance of the residuals changes when x changes.
• Residuals exhibit a linear trend with time.
• A curved pattern in the residuals, resulting from an underlying nonlinear relationship.
7-10 Use of the Regression
Model for Prediction
• Point Prediction
  – A single-valued estimate of Y for a given value of X, obtained by inserting the value of X in the estimated regression equation.
• Prediction Interval
  – For a value of Y given a value of X
    · Variation in regression line estimate.
    · Variation of points around regression line.
  – For an average value of Y given a value of X
    · Variation in regression line estimate.
Errors in Predicting E[Y|X]
[Figures: bands around the regression line formed by the upper and lower limits on the slope, and by the upper and lower limits on the intercept.]
1) Uncertainty about the slope of the regression line.
2) Uncertainty about the intercept of the regression line.
Prediction Interval for E[Y|X]
[Figure: prediction band for E[Y|X] around the regression line.]
• The prediction band for E[Y|X] is narrowest at the mean value of X.
• The prediction band widens as the distance from the mean of X increases.
• Predictions become very unreliable when we extrapolate beyond the range of the sample itself.
Additional Error in Predicting
Individual Value of Y
[Figures: the prediction band for an individual value of Y is wider than the prediction band for E[Y|X].]
3) Variation around the regression line.
Prediction Interval for a Value
of Y
A (1 − α)100% prediction interval for Y:

ŷ ± t(α/2, n−2) s √( 1 + 1/n + (x − x̄)²/SSX )

Example 7-1 (X = 4000):

274.85 + (1.2553)(4000) ± (2.069)(318.16) √( 1 + 1/25 + (4000 − 3177.92)²/40947557.84 )
= 5296.05 ± 676.62 = [4619.43, 5972.67]
Prediction Interval for the
Average Value of Y
A (1 − α)100% prediction interval for E[Y|X]:

ŷ ± t(α/2, n−2) s √( 1/n + (x − x̄)²/SSX )

Example 7-1 (X = 4000):

274.85 + (1.2553)(4000) ± (2.069)(318.16) √( 1/25 + (4000 − 3177.92)²/40947557.84 )
= 5296.05 ± 156.48 = [5139.57, 5452.53]
Using the Computer
MTB > regress 'Dollars' 1 'Miles' tres in C3 fits in C4;
SUBC> predict 4000;
SUBC> residuals in C5.
Regression Analysis

The regression equation is


Dollars = 275 + 1.26 Miles

Predictor Coef Stdev t-ratio p


Constant 274.8 170.3 1.61 0.120
Miles 1.25533 0.04972 25.25 0.000

s = 318.2 R-sq = 96.5% R-sq(adj) = 96.4%

Analysis of Variance

SOURCE DF SS MS F p
Regression 1 64527736 64527736 637.47 0.000
Error 23 2328161 101224
Total 24 66855896

Fit Stdev.Fit 95.0% C.I. 95.0% P.I.


5296.2 75.6 ( 5139.7, 5452.7) ( 4619.5, 5972.8)
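Both intervals at X = 4000 can be reproduced with a short script; every input below is a quantity quoted earlier for Example 7-1, and the results match the hand calculation and the Minitab Fit / 95% C.I. / 95% P.I. line above:

import numpy as np
from scipy import stats

n, x_bar, ss_x, s = 25, 3177.92, 40947557.84, 318.158
b0, b1 = 274.85, 1.2553
x0 = 4000

y_hat = b0 + b1 * x0                          # point prediction, ~5296
t_crit = stats.t.ppf(0.975, df=n - 2)         # ~2.069
core = 1 / n + (x0 - x_bar) ** 2 / ss_x

half_mean = t_crit * s * np.sqrt(core)        # half-width for E[Y|X], ~156
half_pred = t_crit * s * np.sqrt(1 + core)    # half-width for an individual Y, ~677
print(y_hat - half_mean, y_hat + half_mean)   # ~[5139.6, 5452.5]
print(y_hat - half_pred, y_hat + half_pred)   # ~[4619.4, 5972.7]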
Plotting on the Computer (1)

MTB > PLOT 'Resids' * 'Fits' MTB > PLOT 'Resids' *'Miles'

500 500
Resids

Resids
0 0

-500 -500

2000 3000 4000 5000 6000 7000 1000 1500 2000 2500 3000 3500 4000 4500 5000 5500
Fits Miles
Plotting on the Computer (2)

MTB > HISTOGRAM 'StRes' MTB > PLOT 'Dollars' * 'Miles'

8 7000
7
6000
6
Frequency

5 5000

Dollars
4
4000
3
2 3000
1
2000
0
-2 -1 0 1 2 1000 1500 2000 2500 3000 3500 4000 4500 5000 5500
StRes Miles
11 Multiple Regression (1)
• Using Statistics.
• The k-Variable Multiple Regression Model.
• The F Test of a Multiple Regression Model.
• How Good is the Regression.
• Tests of the Significance of Individual Regression
Parameters.
• Testing the Validity of the Regression Model.
• Using the Multiple Regression Model for
Prediction.
11 Multiple Regression (2)
• Qualitative Independent Variables.
• Polynomial Regression.
• Nonlinear Models and Transformations.
• Multicollinearity.
• Residual Autocorrelation and the Durbin-Watson Test.
• Partial F Tests and Variable Selection Methods.
• Using the Computer.
• The Matrix Approach to Multiple Regression Analysis.
• Summary and Review of Terms.
7-11 Using Statistics
[Figures: a line (intercept β0, slope β1) in the (x, y) plane, and a plane in (x1, x2, y) space.]
Any two points (A and B), or an intercept and slope (β0 and β1), define a line on a two-dimensional surface. Any three points (A, B, and C), or an intercept and coefficients of x1 and x2 (β0, β1, and β2), define a plane in a three-dimensional surface.
7-12 The k-Variable Multiple
Regression Model
The population regression model of a dependent variable, Y, on a set of k independent variables, X1, X2, . . . , Xk is given by:

Y = β0 + β1X1 + β2X2 + . . . + βkXk + ε

where β0 is the Y-intercept of the regression surface and each βi, i = 1, 2, ..., k is the slope of the regression surface - sometimes called the response surface - with respect to Xi.

[Figure: the regression plane y = β0 + β1x1 + β2x2 + ε in (x1, x2, y) space.]

Model assumptions:
1. ε ~ N(0, σ²), independent of other errors.
2. The variables Xi are uncorrelated with the error term.
Simple and Multiple Least-
Squares Regression
[Figures: a fitted line ŷ = b0 + b1x and a fitted plane ŷ = b0 + b1x1 + b2x2.]
In a simple regression model, the least-squares estimators minimize the sum of squared errors from the estimated regression line. In a multiple regression model, the least-squares estimators minimize the sum of squared errors from the estimated regression plane.
The Estimated Regression
Relationship
The estimated regression relationship:

Ŷ = b0 + b1X1 + b2X2 + . . . + bkXk

where Ŷ is the predicted value of Y, the value lying on the estimated regression surface. The terms b0, b1, ..., bk are the least-squares estimates of the population regression parameters βi.

The actual, observed value of Y is the predicted value plus an error:

y = b0 + b1x1 + b2x2 + . . . + bkxk + e
Least-Squares Estimation:
The 2-Variable Normal
Equations
Minimizing the sum of squared errors with respect to the estimated coefficients b0, b1, and b2 yields the following normal equations:

Σy = nb0 + b1 Σx1 + b2 Σx2
Σx1y = b0 Σx1 + b1 Σx1² + b2 Σx1x2
Σx2y = b0 Σx2 + b1 Σx1x2 + b2 Σx2²
Example 7-3
Y     X1    X2    X1X2   X1²    X2²   X1Y    X2Y
72    12    5     60     144    25    864    360
76    11    8     88     121    64    836    608
78    15    6     90     225    36    1170   468
70    10    5     50     100    25    700    350
68    11    3     33     121    9     748    204
80    16    9     144    256    81    1280   720
82    14    12    168    196    144   1148   984
65    8     4     32     64     16    520    260
62    8     3     24     64     9     496    186
90    18    10    180    324    100   1620   900
---   ---   ---   ---    ----   ---   ----   ----
743   123   65    869    1615   509   9382   5040

Normal Equations:
743 = 10b0 + 123b1 + 65b2
9382 = 123b0 + 1615b1 + 869b2
5040 = 65b0 + 869b1 + 509b2

Solution: b0 = 47.164942, b1 = 1.5990404, b2 = 1.1487479

Estimated regression equation:

Ŷ = 47.164942 + 1.5990404 X1 + 1.1487479 X2
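The normal equations can be solved directly as a 3-by-3 linear system. A sketch in numpy, using the Example 7-3 sums from the table above:

import numpy as np

A = np.array([[10.0, 123.0, 65.0],       # coefficients of b0, b1, b2
              [123.0, 1615.0, 869.0],
              [65.0, 869.0, 509.0]])
rhs = np.array([743.0, 9382.0, 5040.0])  # right-hand sides of the three equations

b0, b1, b2 = np.linalg.solve(A, rhs)     # ~47.165, ~1.599, ~1.149
print(b0, b1, b2)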
Example 7-3: Using the
Computer
SUMMARY OUTPUT

Regression Statistics Excel Output


Multiple R 0.980326323
R Square 0.961039699
Adjusted R Square 0.949908185
Standard Error 1.910940432
Observations 10

ANOVA
df SS MS F Significance F
Regression 2 630.5381466 315.2690733 86.33503537 1.16729E-05
Residual 7 25.56185335 3.651693336
Total 9 656.1

Coefficients Standard Error t Stat P-value Lower 95% Upper 95%


Intercept 47.16494227 2.470414433 19.09191496 2.69229E-07 41.32334457 53.00653997
X1 1.599040336 0.280963057 5.691283238 0.00074201 0.934668753 2.263411919
X2 1.148747938 0.30524885 3.763316185 0.007044246 0.426949621 1.870546256
Decomposition of the Total Deviation in a Multiple Regression Model
[Figure: one observation's deviation from ȳ split along the fitted surface.]
Total deviation: Y − Ȳ
Regression deviation: Ŷ − Ȳ
Error deviation: Y − Ŷ

Total Deviation = Regression Deviation + Error Deviation
SST = SSR + SSE
7-13 The F Test of a Multiple
Regression Model
A statistical test for the existence of a linear relationship between Y and any or all of the independent variables X1, X2, ..., Xk:

H0: β1 = β2 = ... = βk = 0
H1: Not all the βi (i = 1, 2, ..., k) are 0

Source of Variation   Sum of Squares   Degrees of Freedom        Mean Square                F Ratio
Regression            SSR              k                         MSR = SSR/k                MSR/MSE
Error                 SSE              n − (k + 1) = n − k − 1   MSE = SSE/(n − (k + 1))
Total                 SST              n − 1                     MST = SST/(n − 1)
Using the Computer: Analysis of
Variance Table (Example 7-3)
Analysis of Variance

SOURCE      DF   SS       MS       F       p
Regression  2    630.54   315.27   86.34   0.000
Error       7    25.56    3.65
Total       9    656.10

[Figure: F distribution with 2 and 7 degrees of freedom, showing the critical point F(0.01) = 9.55 at α = 0.01 and the test statistic 86.34.]
The test statistic, F = 86.34, is greater than the critical point of F(2, 7) for any common level of significance (p-value ≈ 0), so the null hypothesis is rejected, and we might conclude that the dependent variable is related to one or more of the independent variables.
7-14 How Good is the Regression
The mean square error is an unbiased estimator of the variance of the population errors ε, denoted by σ²:

MSE = SSE / (n − (k + 1)) = Σ(y − ŷ)² / (n − (k + 1))

Standard error of estimate: s = √MSE

[Figure: errors y − ŷ around the fitted regression surface.]

The multiple coefficient of determination, R², measures the proportion of the variation in the dependent variable that is explained by the combination of the independent variables in the multiple regression model:

R² = SSR/SST = 1 − SSE/SST
Decomposition of the Sum of
Squares and the Adjusted
Coefficient of Determination
SST = SSR + SSE

R² = SSR/SST = 1 − SSE/SST

The adjusted multiple coefficient of determination, R̄², is the coefficient of determination with the SSE and SST divided by their respective degrees of freedom:

R̄² = 1 − [ SSE/(n − (k + 1)) ] / [ SST/(n − 1) ]

Example 7-3:  s = 1.911   R-sq = 96.1%   R-sq(adj) = 95.0%
Measures of Performance in
Multiple Regression and the
ANOVA Table
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square                F Ratio
Regression            SSR              k                    MSR = SSR/k                F = MSR/MSE
Error                 SSE              n − (k + 1)          MSE = SSE/(n − (k + 1))
                                       = n − k − 1
Total                 SST              n − 1                MST = SST/(n − 1)

R² = SSR/SST = 1 − SSE/SST

F = [ R² / (1 − R²) ] · [ (n − (k + 1)) / k ]

R̄² = 1 − [ SSE/(n − (k + 1)) ] / [ SST/(n − 1) ] = 1 − MSE/MST
7-15 Tests of the Significance of
Individual Regression Parameters
Hypothesis tests about individual regression slope parameters:

(1) H0: β1 = 0;  H1: β1 ≠ 0
(2) H0: β2 = 0;  H1: β2 ≠ 0
.
.
.
(k) H0: βk = 0;  H1: βk ≠ 0

Test statistic for test i:

t(n − (k + 1)) = (bi − 0) / s(bi)
Regression Results for
Individual Parameters
Variable   Coefficient Estimate   Standard Error   t-Statistic
Constant   53.12                  5.43             9.783 *
X1         2.03                   0.22             9.227 *
X2         5.60                   1.30             4.308 *
X3         10.35                  6.88             1.504
X4         3.45                   2.70             1.259
X5         -4.25                  0.38             11.184 *

n = 150   t(0.025) = 1.96   (* marks coefficients significant at the 5% level)
Example 7-3: Using the Computer
MTB > regress 'Y' on 2 predictors 'X1' 'X2'

Regression Analysis

The regression equation is


Y = 47.2 + 1.60 X1 + 1.15 X2

Predictor Coef Stdev t-ratio p


Constant 47.165 2.470 19.09 0.000
X1 1.5990 0.2810 5.69 0.000
X2 1.1487 0.3052 3.76 0.007

s = 1.911 R-sq = 96.1% R-sq(adj) = 95.0%

Analysis of Variance

SOURCE      DF   SS       MS       F       p
Regression  2    630.54   315.27   86.34   0.000
Error 7 25.56 3.65
Total 9 656.10

SOURCE DF SEQ SS
X1 1 578.82
X2 1 51.72
Using the Computer: Example 7-4
MTB > READ ‘a:\data\c11_t6.dat’ C1-C5
MTB > NAME c1 'EXPORTS' c2 'M1' c3 'LEND' c4 'PRICE' C5 'EXCHANGE'
MTB > REGRESS 'EXPORTS' on 4 predictors 'M1' 'LEND' 'PRICE' 'EXCHANGE'

Regression Analysis

The regression equation is


EXPORTS = - 4.02 + 0.368 M1 + 0.0047 LEND + 0.0365 PRICE + 0.27 EXCHANGE

Predictor Coef Stdev t-ratio p


Constant -4.015 2.766 -1.45 0.152
M1 0.36846 0.06385 5.77 0.000
LEND 0.00470 0.04922 0.10 0.924
PRICE 0.036511 0.009326 3.91 0.000
EXCHANGE 0.268 1.175 0.23 0.820

s = 0.3358 R-sq = 82.5% R-sq(adj) = 81.4%

Analysis of Variance

SOURCE DF SS MS F p
Regression 4 32.9463 8.2366 73.06 0.000
Error 62 6.9898 0.1127
Total 66 39.9361
Example 7-5: Three Predictors
MTB > REGRESS 'EXPORTS' on 3 predictors 'LEND' 'PRICE' 'EXCHANGE'

Regression Analysis

The regression equation is


EXPORTS = - 0.29 - 0.211 LEND + 0.0781 PRICE - 2.10 EXCHANGE

Predictor Coef Stdev t-ratio p


Constant -0.289 3.308 -0.09 0.931
LEND -0.21140 0.03929 -5.38 0.000
PRICE 0.078148 0.007268 10.75 0.000
EXCHANGE -2.095 1.355 -1.55 0.127

s = 0.4130 R-sq = 73.1% R-sq(adj) = 71.8%

Analysis of Variance

SOURCE DF SS MS F p
Regression 3 29.1919 9.7306 57.06 0.000
Error 63 10.7442 0.1705
Total 66 39.9361
Example 7-5: Two Predictors
MTB > REGRESS 'EXPORTS' on 2 predictors 'M1' 'PRICE'

Regression Analysis

The regression equation is


EXPORTS = - 3.42 + 0.361 M1 + 0.0370 PRICE

Predictor Coef Stdev t-ratio p


Constant -3.4230 0.5409 -6.33 0.000
M1 0.36142 0.03925 9.21 0.000
PRICE 0.037033 0.004094 9.05 0.000

s = 0.3306 R-sq = 82.5% R-sq(adj) = 81.9%

Analysis of Variance

SOURCE DF SS MS F p
Regression 2 32.940 16.470 150.67 0.000
Error 64 6.996 0.109
Total 66 39.936
7-16 Investigating the Validity
of the Regression Model:
Residual Plots
[Residual plots for the exports regression:]
Residuals Plotted Against M1 (Apparently Random)
Residuals Plotted Against Price (Apparent Heteroscedasticity)
Investigating the Validity of the
Regression: Residual Plots (2)
[Residual plots, continued:]
Residuals Plotted Against Time (Apparently Random)
Residuals Plotted Against Fitted Values (Apparent Heteroscedasticity)
Histogram of Standardized
Residuals: Example 7-6
MTB > Histogram 'SRES1'.

Histogram of SRES1   N = 67

Midpoint   Count
-3.0        1   *
-2.5        1   *
-2.0        3   ***
-1.5        1   *
-1.0        5   *****
-0.5       13   *************
 0.0       19   *******************
 0.5       12   ************
 1.0        6   ******
 1.5        3   ***
 2.0        2   **
 2.5        0
 3.0        1   *

Standardized residuals: approximately ~ N(0, 1).
Investigating the Validity of the
Regression: Outliers and
Influential Observations
[Figures:
Outliers - the regression line fitted without the outlier versus the regression line fitted with the outlier (*) included; the outlier pulls the line toward itself.
Influential observations - a point with a large value of xi drives the regression line fitted when all data are included, even though there is no relationship in the main cluster of points.]
Outliers and Influential
Observations: Example 7-6
Unusual Observations
Obs.   M1     EXPORTS   Fit      Stdev.Fit   Residual   St.Resid
1 5.10 2.6000 2.6420 0.1288 -0.0420 -0.14 X
2 4.90 2.6000 2.6438 0.1234 -0.0438 -0.14 X
25 6.20 5.5000 4.5949 0.0676 0.9051 2.80R
26 6.30 3.7000 4.6311 0.0651 -0.9311 -2.87R
50 8.30 4.3000 5.1317 0.0648 -0.8317 -2.57R
67 8.20 5.6000 4.9474 0.0668 0.6526 2.02R

R denotes an obs. with a large st. resid.


X denotes an obs. whose X value gives it large influence.
7-17 Using the Multiple
Regression Model for
Prediction
[Figure: estimated regression plane for Example 11-1, Sales as a function of Advertising and Promotions.]
Prediction in Multiple
Regression
A (1 − α)100% prediction interval for a value of Y given values of Xi:

ŷ ± t(α/2, n−(k+1)) √( s²(ŷ) + MSE )

A (1 − α)100% prediction interval for the conditional mean of Y given values of Xi:

ŷ ± t(α/2, n−(k+1)) s[E(Ŷ)]
MTB > regress 'EXPORTS' 2 'M1' 'PRICE';


SUBC> predict 6 160;
SUBC> predict 5 150;
SUBC> predict 4 130.
Fit Stdev.Fit 95.0% C.I. 95.0% P.I.
4.6708 0.0853 ( 4.5003, 4.8412) ( 3.9885, 5.3530)
3.9390 0.0901 ( 3.7590, 4.1190) ( 3.2543, 4.6237)
2.8370 0.1116 ( 2.6140, 3.0599) ( 2.1397, 3.5342)
7-18 Qualitative (or
Categorical) Independent
Variables (in Regression)
An indicator (dummy, binary) variable of qualitative level A:

Xh = 1 if level A is obtained; Xh = 0 if level A is not obtained.

MOVIE   EARN   COST   PROM   BOOK
1       28     4.2    1.0    0
2       35     6.0    3.0    1
3       50     5.5    6.0    1
4       20     3.3    1.0    0
5       75     12.5   11.0   1
6       60     9.6    8.0    1
7       15     2.5    0.5    0
8       45     10.8   5.0    0
9       50     8.4    3.0    1
10      34     6.6    2.0    0
11      48     10.7   1.0    1
12      82     11.0   15.0   1
13      24     3.5    4.0    0
14      50     6.9    10.0   0
15      58     7.8    9.0    1
16      63     10.1   10.0   0
17      30     5.0    1.0    1
18      37     7.5    5.0    0
19      45     6.4    8.0    1
20      72     10.0   12.0   1

MTB > regress 'EARN' 3 'COST' 'PROM' 'BOOK'

Regression Analysis

The regression equation is
EARN = 7.84 + 2.85 COST + 2.28 PROM + 7.17 BOOK

Predictor   Coef     Stdev    t-ratio   p
Constant    7.836    2.333    3.36      0.004
COST        2.8477   0.3923   7.26      0.000
PROM        2.2782   0.2534   8.99      0.000
BOOK        7.166    1.818    3.94      0.001

s = 3.690   R-sq = 96.7%   R-sq(adj) = 96.0%

Analysis of Variance

SOURCE      DF   SS       MS       F        p
Regression  3    6325.2   2108.4   154.89   0.000
Error       16   217.8    13.6
Total       19   6543.0
Picturing Qualitative Variables
in Regression
[Figures: two parallel lines, for X2 = 0 (intercept b0) and X2 = 1 (intercept b0 + b2); and two parallel planes shifted vertically by b3.]
A regression with one quantitative variable (X1) and one qualitative variable (X2):

ŷ = b0 + b1x1 + b2x2

A multiple regression with two quantitative variables (X1 and X2) and one qualitative variable (X3):

ŷ = b0 + b1x1 + b2x2 + b3x3
Picturing Qualitative Variables in
Regression: Three Categories and
Two Dummy Variables
[Figure: three parallel lines - X2 = 0 and X3 = 0 (intercept b0), X2 = 1 and X3 = 0 (intercept b0 + b2), X2 = 0 and X3 = 1 (intercept b0 + b3).]
A qualitative variable with r levels or categories is represented with (r − 1) 0/1 (dummy) variables.
A regression with one quantitative variable (X1) and two qualitative variables (X2 and X3):

ŷ = b0 + b1x1 + b2x2 + b3x3

Category    X2   X3
Adventure   0    0
Drama       0    1
Romance     1    0
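A sketch of this (r − 1)-dummy coding in Python with pandas, using the three movie categories from the table above (the Series contents are illustrative; with drop_first the baseline category, Adventure, is the one coded 0 on every dummy):

import pandas as pd

genres = pd.Series(["Adventure", "Drama", "Romance", "Drama", "Adventure"])

# r = 3 categories -> r - 1 = 2 dummy (0/1) variables
dummies = pd.get_dummies(genres, drop_first=True).astype(int)
print(dummies)    # columns 'Drama' and 'Romance', matching the coding in the table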
Using Qualitative Variables in Regression: Example 7-6

Salary = 8547 + 949 Education + 1258 Experience − 3256 Gender
(SE)     (32.6)  (45.1)         (78.5)            (212.4)
(t)      (262.2) (21.0)         (16.0)            (−15.3)

Gender = 1 if Female, 0 if Male

On average, female salaries are $3256 below male salaries.
Interactions between
Quantitative and Qualitative
Variables: Shifting Slopes
[Figure: two lines with different slopes - for X2 = 0, intercept b0 and slope b1; for X2 = 1, intercept b0 + b2 and slope b1 + b3.]
A regression with interaction between a quantitative variable (X1) and a qualitative variable (X2):

ŷ = b0 + b1x1 + b2x2 + b3x1x2
7-19 Polynomial Regression
One-variable polynomial regression model:
Y=0+1 X + 2X2 + 3X3 +. . . + mXm +
where m is the degree of the polynomial - the highest power of X appearing in the
equation. The degree of the polynomial is the order of the model.

Y Y
y  b  b X
y  b  b X
0 1
0 1

y  b  b X  b X
0 1 2
2

(b  0) y  b  b X  b X  b X
0 1 2
2
3
3

X1 X1
Polynomial Regression:
Example 7-7
MTB > regress 'sales' 2 'advert' 'advsqr'

Regression Analysis

The regression equation is
SALES = 3.52 + 2.51 ADVERT - 0.0875 ADVSQR

Predictor   Coef       Stdev     t-ratio   p
Constant    3.5150     0.7385    4.76      0.000
ADVERT      2.5148     0.2580    9.75      0.000
ADVSQR      -0.08745   0.01658   -5.28     0.000

s = 1.228   R-sq = 95.9%   R-sq(adj) = 95.4%

Analysis of Variance

SOURCE      DF   SS       MS       F        p
Regression  2    630.26   315.13   208.99   0.000
Error       18   27.14    1.51
Total       20   657.40

[Scatterplot of SALES against ADVERT with the fitted quadratic curve.]
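The quadratic fit is an ordinary multiple regression on ADVERT and its square. A sketch with statsmodels (the Example 7-7 data are not reproduced on the slide, so the arrays below are placeholders):

import numpy as np
import statsmodels.api as sm

advert = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])      # placeholder values
sales = np.array([5.5, 8.0, 12.0, 15.5, 17.5, 19.0, 19.5])    # placeholder values

X = sm.add_constant(np.column_stack([advert, advert**2]))     # columns: 1, ADVERT, ADVSQR
fit = sm.OLS(sales, X).fit()
print(fit.params)    # b0, b1 (ADVERT), b2 (ADVSQR); the slide reports 3.52, 2.51, -0.0875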
Polynomial Regression: Other Variables and Cross-Product Terms

Variable   Estimate   Standard Error   T-statistic
X1         2.34       0.92             2.54
X2         3.11       1.05             2.96
X1²        4.22       1.00             4.22
X2²        3.57       2.12             1.68
X1X2       2.77       2.30             1.20
7-20 Nonlinear Models and
Transformations:
Multiplicative Model
The multiplicative model:

Y = β0 X1^β1 X2^β2 X3^β3 ε

The logarithmic transformation:

log Y = log β0 + β1 log X1 + β2 log X2 + β3 log X3 + log ε
MTB > loge c1 c3


MTB > loge c2 c4
MTB > name c3 'LOGSALE' c4 'LOGADV'
MTB > regress 'logsale' 1 'logadv'
Regression Analysis
The regression equation is
LOGSALE = 1.70 + 0.553 LOGADV

Predictor Coef Stdev t-ratio p


Constant 1.70082 0.05123 33.20 0.000
LOGADV 0.55314 0.03011 18.37 0.000

s = 0.1125 R-sq = 94.7% R-sq(adj) = 94.4%

Analysis of Variance
SOURCE DF SS MS F p
Regression 1 4.2722 4.2722 337.56 0.000
Error 19 0.2405 0.0127
Total 20 4.5126
Transformations:
Exponential Model
The exponential model:

Y = β0 e^(β1X) ε

The logarithmic transformation:

log Y = log β0 + β1X + log ε
MTB > regress 'sales' 1 'logadv'

Regression Analysis
The regression equation is
SALES = 3.67 + 6.78 LOGADV

Predictor Coef Stdev t-ratio p


Constant 3.6683 0.4016 9.13 0.000
LOGADV 6.7840 0.2360 28.74 0.000

s = 0.8819 R-sq = 97.8% R-sq(adj) = 97.6%

Analysis of Variance

SOURCE DF SS MS F p
Regression 1 642.62 642.62 826.24 0.000
Error 19 14.78 0.78
Total 20 657.40
Plots of Transformed Variables
[Four panels:]
• Simple Regression of Sales on Advertising: Y = 6.59271 + 1.19176X, R-Squared = 0.895
• Regression of Sales on Log(Advertising): Y = 3.66825 + 6.784X, R-Squared = 0.978
• Regression of Log(Sales) on Log(Advertising): Y = 1.70082 + 0.553136X, R-Squared = 0.947
• Residual Plots: Sales vs Log(Advertising)
Variance Stabilizing
Transformations
• Square root transformation: Y′ = √Y
  – Useful when the variance of the regression errors is approximately proportional to the conditional mean of Y.
• Logarithmic transformation: Y′ = log(Y)
  – Useful when the variance of the regression errors is approximately proportional to the square of the conditional mean of Y.
• Reciprocal transformation: Y′ = 1/Y
  – Useful when the variance of the regression errors is approximately proportional to the fourth power of the conditional mean of Y.
Regression with Dependent
Indicator Variables
The logistic function:

E(Y|X) = e^(β0 + β1X) / (1 + e^(β0 + β1X))

Transformation to linearize the logistic function:

p′ = log[ p / (1 − p) ]

[Figure: the S-shaped logistic function rising from 0 to 1.]
7-21 Multicollinearity
[Diagrams: overlap in the information content of the X variables.]
• Orthogonal X variables provide information from independent sources. No multicollinearity.
• Perfectly collinear X variables provide identical information content. No regression.
• Some degree of collinearity: problems with regression depend on the degree of collinearity.
• A high degree of negative collinearity also causes problems with regression.
Effects of Multicollinearity
• Variances of regression coefficients are inflated.
• Magnitudes of regression coefficients may differ from what is expected.
• Signs of regression coefficients may not be as
expected.
• Adding or removing variables produces large
changes in coefficients.
• Removing a data point may cause large changes in
coefficient estimates or signs.
• In some cases, the F ratio may be significant while
the t ratios are not.
Detecting the Existence of
Multicollinearity: Correlation Matrix of
Independent Variables and Variance
Inflation Factors
MTB > CORRELATION 'm1' 'lend’ 'price’ 'exchange'

Correlations (Pearson)

M1 LEND PRICE
LEND -0.112
PRICE 0.447 0.745
EXCHANGE -0.410 -0.279 -0.420

MTB > regress 'exports' on 4 predictors 'm1’ 'lend’ 'price’ 'exchange';


SUBC> vif.

Regression Analysis
The regression equation is
EXPORTS = - 4.02 + 0.368 M1 + 0.0047 LEND + 0.0365 PRICE + 0.27 EXCHANGE

Predictor Coef Stdev t-ratio p VIF


Constant -4.015 2.766 -1.45 0.152
M1 0.36846 0.06385 5.77 0.000 3.2
LEND 0.00470 0.04922 0.10 0.924 5.4
PRICE 0.036511 0.009326 3.91 0.000 6.3
EXCHANGE 0.268 1.175 0.23 0.820 1.4

s = 0.3358 R-sq = 82.5% R-sq(adj) = 81.4%


Variance Inflation Factor
The variance inflation factor associated with Xh:

VIF(Xh) = 1 / (1 − Rh²)

where Rh² is the R² value obtained for the regression of Xh on the other independent variables.

[Figure: VIF rises slowly for small Rh² and increases sharply toward infinity as Rh² approaches 1.]
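A sketch of the VIF computation: regress each Xh on the remaining predictors and apply the formula above (pure numpy; here X holds only the predictor columns, without an intercept column):

import numpy as np

def vif(X):
    # Variance inflation factor for each column of the n-by-k predictor matrix X
    n, k = X.shape
    out = []
    for h in range(k):
        y_h = X[:, h]                                 # treat X_h as the response
        A = np.column_stack([np.ones(n), np.delete(X, h, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y_h, rcond=None)
        resid = y_h - A @ coef
        r2 = 1 - resid @ resid / np.sum((y_h - y_h.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))                  # VIF(X_h) = 1 / (1 - R_h^2)
    return np.array(out)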
Solutions to the
Multicollinearity Problem
• Drop a collinear variable from the
regression.
• Change the sampling plan to include elements outside the range of the multicollinearity.
• Transformations of variables.
• Ridge regression.
7-22 Residual Autocorrelation
and the Durbin-Watson Test
An autocorrelation is a correlation of the values of a variable
with values of the same variable lagged one or more periods
back. Consequences of autocorrelation include inaccurate
estimates of variances and inaccurate predictions.
The Durbin-Watson test (first-order autocorrelation):

H0: ρ1 = 0
H1: ρ1 ≠ 0

The Durbin-Watson test statistic:

d = Σ(i=2 to n) (ei − e(i−1))² / Σ(i=1 to n) ei²

Lagged Residuals:

i     εi     εi−1   εi−2   εi−3   εi−4
1     1.0    *      *      *      *
2     0.0    1.0    *      *      *
3     -1.0   0.0    1.0    *      *
4     2.0    -1.0   0.0    1.0    *
5     3.0    2.0    -1.0   0.0    1.0
6     -2.0   3.0    2.0    -1.0   0.0
7     1.0    -2.0   3.0    2.0    -1.0
8     1.5    1.0    -2.0   3.0    2.0
9     1.0    1.5    1.0    -2.0   3.0
10    -2.5   1.0    1.5    1.0    -2.0
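Once the residuals are available, the statistic is a one-liner. A minimal sketch, using the illustrative εi column from the table above as the residuals:

import numpy as np

e = np.array([1.0, 0.0, -1.0, 2.0, 3.0, -2.0, 1.0, 1.5, 1.0, -2.5])   # residuals e_i

d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)   # sum of (e_i - e_{i-1})^2 over sum of e_i^2
print(d)    # values near 2 suggest no first-order autocorrelation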
Critical Points of the Durbin-Watson Statistic: α = 0.05, n = Sample Size, k = Number of Independent Variables

k=1 k=2 k=3 k=4 k=5


n dL dU dL dU dL dU dL dU dL dU
15 1.08 1.36 0.95 1.54 0.82 1.75 0.69 1.97 0.56 2.21
16 1.10 1.37 0.98 1.54 0.86 1.73 0.74 1.93 0.62 2.15
17 1.13 1.38 1.02 1.54 0.90 1.71 0.78 1.90 0.67 2.10
18 1.16 1.39 1.05 1.53 0.93 1.69 0.82 1.87 0.71 2.06
. . . . . .
. . . . . .
. . . . . .
65 1.57 1.63 1.54 1.66 1.50 1.70 1.47 1.73 1.44 1.77
70 1.58 1.64 1.55 1.67 1.52 1.70 1.49 1.74 1.46 1.77
75 1.60 1.65 1.57 1.68 1.54 1.71 1.51 1.74 1.49 1.77
80 1.61 1.66 1.59 1.69 1.56 1.72 1.53 1.74 1.51 1.77
85 1.62 1.67 1.60 1.70 1.57 1.72 1.55 1.75 1.52 1.77
90 1.63 1.68 1.61 1.70 1.59 1.73 1.57 1.75 1.54 1.78
95 1.64 1.69 1.62 1.71 1.60 1.73 1.58 1.75 1.56 1.78
100 1.65 1.69 1.63 1.72 1.61 1.74 1.59 1.76 1.57 1.78
Using the Durbin-Watson
Statistic
MTB > regress 'EXPORTS' 4 'M1' 'LEND' 'PRICE' 'EXCHANGE';
SUBC> dw.

Durbin-Watson statistic = 2.58

[Decision regions: 0 to dL - positive autocorrelation; dL to dU - test is inconclusive; dU to 4 − dU - no autocorrelation; 4 − dU to 4 − dL - test is inconclusive; 4 − dL to 4 - negative autocorrelation.]

For n = 67, k = 4: dL ≈ 1.47, dU ≈ 1.73, 4 − dU ≈ 2.27, 4 − dL ≈ 2.53 < 2.58

H0 is rejected, and we conclude there is negative first-order autocorrelation.
7-23 Partial F Tests and
Variable Selection Methods
Full model:
Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε

Reduced model:
Y = β0 + β1X1 + β2X2 + ε

Partial F test:
H0: β3 = β4 = 0
H1: β3 and β4 not both 0

Partial F statistic:

F(r, n − (k + 1)) = [ (SSER − SSEF) / r ] / MSEF

where SSER is the sum of squared errors of the reduced model, SSEF is the sum of squared errors of the full model; MSEF is the mean square error of the full model [MSEF = SSEF/(n − (k + 1))]; r is the number of variables dropped from the full model.
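A sketch of the partial F computation in Python. As an illustration (my pairing of outputs shown earlier, not a test performed on the slides), take the reduced model to be the two-predictor exports regression on M1 and PRICE (SSE = 6.996) and the full model to be the four-predictor regression (SSE = 6.9898, n = 67, k = 4), so r = 2 variables (LEND and EXCHANGE) are dropped:

from scipy import stats

def partial_f(sse_reduced, sse_full, r, n, k):
    # Partial F statistic for dropping r variables from a full model with k predictors
    mse_full = sse_full / (n - (k + 1))
    f = ((sse_reduced - sse_full) / r) / mse_full
    p = stats.f.sf(f, r, n - (k + 1))              # upper-tail p-value
    return f, p

print(partial_f(sse_reduced=6.996, sse_full=6.9898, r=2, n=67, k=4))
# F is tiny (~0.03), so dropping LEND and EXCHANGE loses essentially nothing.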
Variable Selection Methods
• All possible regressions
 Run regressions with all possible combinations of independent
variables and select best model.
• Stepwise procedures
 Forward selection
Add one variable at a time to the model, on the basis of its F
statistic.
 Backward elimination
Remove one variable at a time, on the basis of its F statistic.
 Stepwise regression
Adds variables to the model and subtracts variables from the
model, on the basis of the F statistic.
Stepwise Regression
1. Compute the F statistic (or p-value) for each variable not in the model.
2. Is there at least one candidate variable with p-value < Pin? If no, stop.
3. If yes, enter the most significant (smallest p-value) variable into the model.
4. Calculate the partial F for all variables in the model.
5. Is there a variable with p-value > Pout? If yes, remove that variable.
6. Return to step 1.
Stepwise Regression: Using
the Computer
MTB > STEPWISE 'EXPORTS' PREDICTORS 'M1’ 'LEND' 'PRICE’ 'EXCHANGE'

Stepwise Regression

F-to-Enter: 4.00 F-to-Remove: 4.00

Response is EXPORTS on 4 predictors, with N = 67

Step 1 2
Constant 0.9348 -3.4230

M1 0.520 0.361
T-Ratio 9.89 9.21

PRICE 0.0370
T-Ratio 9.05

S 0.495 0.331
R-Sq 60.08 82.48
Using the Computer: MINITAB
MTB > REGRESS 'EXPORTS’ 4 'M1’ 'LEND’ 'PRICE' 'EXCHANGE';
SUBC> vif;
SUBC> dw.
Regression Analysis
The regression equation is
EXPORTS = - 4.02 + 0.368 M1 + 0.0047 LEND + 0.0365 PRICE + 0.27 EXCHANGE

Predictor Coef Stdev t-ratio p VIF


Constant -4.015 2.766 -1.45 0.152
M1 0.36846 0.06385 5.77 0.000 3.2
LEND 0.00470 0.04922 0.10 0.924 5.4
PRICE 0.036511 0.009326 3.91 0.000 6.3
EXCHANGE 0.268 1.175 0.23 0.820 1.4

s = 0.3358 R-sq = 82.5% R-sq(adj) = 81.4%

Analysis of Variance

SOURCE DF SS MS F p
Regression 4 32.9463 8.2366 73.06 0.000
Error 62 6.9898 0.1127
Total 66 39.9361

Durbin-Watson statistic = 2.58


Using the Computer: SAS
data exports;
infile 'c:\aczel\data\c11_t6.dat';
input exports m1 lend price exchange;
proc reg data = exports;
model exports=m1 lend price exchange/dw vif;
run;

Model: MODEL1
Dependent Variable: EXPORTS

Analysis of Variance

Sum of Mean
Source DF Squares Square F Value Prob>F

Model 4 32.94634 8.23658 73.059 0.0001


Error 62 6.98978 0.11274
C Total 66 39.93612

Root MSE 0.33577 R-square 0.8250


Dep Mean 4.52836 Adj R-sq 0.8137
C.V. 7.41473
Using the Computer: SAS
(continued)
Parameter Estimates

Parameter Standard T for H0:


Variable DF Estimate Error Parameter=0 Prob > |T|

INTERCEP 1 -4.015461 2.76640057 -1.452 0.1517


M1 1 0.368456 0.06384841 5.771 0.0001
LEND 1 0.004702 0.04922186 0.096 0.9242
PRICE 1 0.036511 0.00932601 3.915 0.0002
EXCHANGE 1 0.267896 1.17544016 0.228 0.8205

Variance
Variable DF Inflation

INTERCEP 1 0.00000000
M1 1 3.20719533
LEND 1 5.35391367
PRICE 1 6.28873181
EXCHANGE 1 1.38570639

Durbin-Watson D 2.583
(For Number of Obs.) 67
1st Order Autocorrelation -0.321
The Matrix Approach to
Regression Analysis (1)
The population regression model:

[y1]   [1  x11  x12  x13  ...  x1k]   [β0]   [ε1]
[y2]   [1  x21  x22  x23  ...  x2k]   [β1]   [ε2]
[y3] = [1  x31  x32  x33  ...  x3k] · [β2] + [ε3]
[ .]   [.   .    .    .          .]   [ .]   [ .]
[yn]   [1  xn1  xn2  xn3  ...  xnk]   [βk]   [εn]

Y = Xβ + ε

The estimated regression model:

Y = Xb + e
The Matrix Approach to
Regression Analysis (2)
The normal equations:

X′Xb = X′Y

Estimators:

b = (X′X)⁻¹X′Y

Predicted values:

Ŷ = Xb = X(X′X)⁻¹X′Y = HY

V(b) = σ²(X′X)⁻¹
s²(b) = MSE (X′X)⁻¹
