Chapter 4 Solutions
1. In the same way as we make assumptions about the true value of beta and
not the estimated values, we make assumptions about the true unobservable
disturbance terms rather than their estimated counterparts, the residuals.
We know the exact value of the residuals, since they are defined by $\hat{u}_t = y_t - \hat{y}_t$.
So we do not need to make any assumptions about the residuals since we
already know their value. We make assumptions about the unobservable error
terms since it is always the true value of the population disturbances that we
are really interested in, although we never actually know what these are.
3. The t-ratios for the coefficients in this model are given in the third row after
the standard errors. They are calculated by dividing the individual coefficients
by their standard errors.
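For reference, the t-ratio for a test of the null hypothesis that a particular coefficient is zero is
$$ t_i = \frac{\hat{\beta}_i}{SE(\hat{\beta}_i)}, $$
which is then compared with a critical value from the t-distribution with $T - k$ degrees of freedom.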
The problem appears to be that the regression parameters are all individually
insignificant (i.e. not significantly different from zero), although the value of
R2 and its adjusted version are both very high, so that the regression taken as a
whole seems to indicate a good fit. This looks like a classic example of what we
term near multicollinearity. This is where the individual regressors are very
closely related, so that it becomes difficult to disentangle the effect of each
individual variable upon the dependent variable.
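Before deciding how to deal with it, one would usually confirm the diagnosis, for example by inspecting the correlation matrix of the regressors or their variance inflation factors (VIFs). A minimal sketch of such a check in Python using statsmodels, with hypothetical variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical regressors; x3 is constructed to be highly correlated with x2.
rng = np.random.default_rng(0)
n = 200
x2 = rng.normal(size=n)
x3 = x2 + rng.normal(scale=0.05, size=n)
X = pd.DataFrame({"x2": x2, "x3": x3})

# Pairwise correlations close to +/-1 are a warning sign.
print(X.corr())

# VIF for each regressor (constant added, as in the regression itself);
# values well above 10 are usually taken to indicate near multicollinearity.
Xc = sm.add_constant(X)
for i, name in enumerate(Xc.columns):
    if name != "const":
        print(name, variance_inflation_factor(Xc.to_numpy(), i))
```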
The solution to near multicollinearity that is usually suggested is that, since the problem is really one of insufficient information in the sample to determine each of the coefficients, one should go out and get more data. In other words, we should switch to a higher frequency of data for analysis (e.g. weekly instead of monthly observations) or lengthen the sample period, so that more information is available to estimate each of the parameters.
Other, more ad hoc methods for dealing with the possible existence of near
multicollinearity were discussed in Chapter 4:
- Ignore it: this is appropriate if the model is otherwise adequate, i.e. statistically satisfactory, with each coefficient being of a plausible magnitude and having an appropriate sign. Sometimes, the existence of multicollinearity does not
reduce the t-ratios on variables that would have been significant without
the multicollinearity sufficiently to make them insignificant. It is worth
stating that the presence of near multicollinearity does not affect the BLUE
properties of the OLS estimator – i.e. it will still be consistent, unbiased
and efficient since the presence of near multicollinearity does not violate
any of the CLRM assumptions 1-4. However, in the presence of near
multicollinearity, it will be hard to obtain small standard errors. This will
not matter if the aim of the model-building exercise is to produce forecasts
from the estimated model, since the forecasts will be unaffected by the
presence of near multicollinearity so long as this relationship between the
explanatory variables continues to hold over the forecasted sample.
- Transform the highly correlated variables into a ratio and include only the
ratio and not the individual variables in the regression. Again, this may be
unacceptable if financial theory suggests that changes in the dependent
variable should occur following changes in the individual explanatory
variables, and not a ratio of them.
(b) The coefficient estimates would still be the “correct” ones (assuming that
the other assumptions required to demonstrate OLS optimality are satisfied),
but the problem would be that the standard errors could be wrong. Hence if
we were trying to test hypotheses about the true parameter values, we could
end up drawing the wrong conclusions. In fact, for all of the variables except
the constant, the standard errors would typically be too small, so that we
would end up rejecting the null hypothesis too many times.
- Transforming the data into logs, which has the effect of reducing the impact of large errors relative to small ones.
5. (a) Autocorrelation is where there is a relationship between the ith and jth residuals (for i ≠ j).
Recall that one of the assumptions of the CLRM was that such a relationship
did not exist. We want our residuals to be random, and if there is evidence of
autocorrelation in the residuals, then it implies that we could predict the sign
of the next residual and get the right answer more than half the time on
average!
(b) The Durbin Watson test is a test for first order autocorrelation. The test is
calculated as follows. You would run whatever regression you were interested
in, and obtain the residuals, $\hat{u}_t$. Then calculate the statistic
$$ DW = \frac{\sum_{t=2}^{T} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{T} \hat{u}_t^2} $$
You would then need to look up the two critical values from the Durbin Watson tables, and these would depend on the number of observations and the number of regressors (excluding the constant this time) in the model.
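As a supplementary illustration (not part of the original answer), the statistic can be computed directly from a fitted model's residuals; a minimal sketch in Python using statsmodels, with simulated placeholder data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Hypothetical data: replace y and X with the actual series from the question.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 0.5 + X @ np.array([1.0, -0.3]) + rng.normal(size=200)

# Run whatever regression is of interest and obtain the residuals.
results = sm.OLS(y, sm.add_constant(X)).fit()
resid = results.resid

# DW = sum_{t=2}^{T} (u_t - u_{t-1})^2 / sum_{t=1}^{T} u_t^2
dw_manual = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
dw_builtin = durbin_watson(resid)  # same calculation, provided by statsmodels

print(dw_manual, dw_builtin)  # values near 2 indicate little first-order autocorrelation
```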
(d)
(e) Yes, there is no reason why we cannot use the Durbin Watson test in this case. You may have said no here because there are lagged values of the regressors (the x variables) in the regression. In fact this would be wrong, since there are no lags of the DEPENDENT (y) variable and hence DW can still be used.
6.
The major steps involved in calculating the long run solution are to
- remove all difference terms altogether since these will all be zero by the
definition of the long run in this context.
We now want to rearrange this to have all the terms in x2 together and so that
y is the subject of the formula:
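The equations from the question are not reproduced above, but to illustrate the procedure, consider a hypothetical dynamic model
$$ \Delta y_t = \beta_1 + \beta_2 \Delta x_{2t} + \beta_3 x_{2,t-1} + \beta_4 y_{t-1} + u_t . $$
Setting the difference terms (and the disturbance) to zero and dropping the time subscripts gives $0 = \beta_1 + \beta_3 x_2 + \beta_4 y$, so that the long run solution is
$$ y = -\frac{\beta_1}{\beta_4} - \frac{\beta_3}{\beta_4} x_2 . $$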
If this still fails, then we really have to admit that the relationship between the
dependent variable and the independent variables was probably not linear
after all so that we have to either estimate a non-linear model for the data
(which is beyond the scope of this course) or we have to go back to the
drawing board and run a different regression containing different variables.
(b) One solution would be to use a technique for estimation and inference
which did not require normality. But these techniques are often highly
complex and also their properties are not so well understood, so we do not
know with such certainty how well the methods will perform in different
circumstances.
One pragmatic approach to failing the normality test is to plot the estimated
residuals of the model, and look for one or more very extreme outliers. These
would be residuals that are much “bigger” (either very big and positive, or very
big and negative) than the rest. It is, fortunately for us, often the case that one
or two very extreme outliers will cause a violation of the normality
assumption. The reason that one or two extreme outliers can cause a violation
of the normality assumption is that they would lead the (absolute value of the)
skewness and / or kurtosis estimates to be very large.
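If the normality test in question is the commonly used Bera-Jarque test, this is easy to see from the form of its statistic, which for a sample of size T with residual skewness $b_1$ and kurtosis $b_2$ is
$$ W = T\left[ \frac{b_1^2}{6} + \frac{(b_2 - 3)^2}{24} \right], $$
so even one or two extreme residuals that inflate $|b_1|$ or $b_2$ can push $W$ beyond its $\chi^2(2)$ critical value.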
Once we spot a few extreme residuals, we should look at the dates when these
outliers occurred. If we have a good theoretical reason for doing so, we can
add in separate dummy variables for big outliers caused by, for example, wars,
changes of government, stock market crashes, changes in market
microstructure (e.g. the “big bang” of 1986). The effect of the dummy variable
is exactly the same as if we had removed the observation from the sample
altogether and estimated the regression on the remainder. Provided that we only remove observations in this way, where there is a clear justification for doing so, we ensure that we do not discard sample points that contain useful information.
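A minimal sketch of this diagnostic process in Python, assuming statsmodels is used for estimation and using simulated placeholder data and an arbitrary example outlier date:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

# Hypothetical monthly data; replace df, 'y' and 'x' with the actual series.
idx = pd.date_range("1981-01-31", periods=180, freq="M")
rng = np.random.default_rng(1)
df = pd.DataFrame({"x": rng.normal(size=180)}, index=idx)
df["y"] = 0.02 + 1.2 * df["x"] + rng.normal(scale=0.05, size=180)

results = sm.OLS(df["y"], sm.add_constant(df["x"])).fit()
jb_stat, jb_pval, skew, kurtosis = jarque_bera(results.resid)
print(jb_stat, jb_pval, skew, kurtosis)

# Inspect the largest residuals and the dates on which they occur.
print(results.resid.abs().sort_values(ascending=False).head())

# If an extreme residual coincides with a known event (e.g. October 1987),
# add a dummy that is one in that month and zero elsewhere, then re-estimate.
df["crash_dummy"] = (df.index == "1987-10-31").astype(float)
X = sm.add_constant(df[["x", "crash_dummy"]])
results2 = sm.OLS(df["y"], X).fit()
```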
(b) 1981M1-1995M12:
$$ \hat{r}_t = 0.0215 + 1.491 r_{mt}, \qquad RSS = 0.189, \quad T = 180 $$
1981M1-1987M10:
$$ \hat{r}_t = 0.0163 + 1.308 r_{mt}, \qquad RSS = 0.079, \quad T = 82 $$
1987M11-1995M12:
$$ \hat{r}_t = 0.0360 + 1.613 r_{mt}, \qquad RSS = 0.082, \quad T = 98 $$
(c) If we define the coefficient estimates for the first and second halves of the sample as $\alpha_1$ and $\beta_1$, and $\alpha_2$ and $\beta_2$ respectively, then the null and alternative hypotheses are
$$ H_0: \alpha_1 = \alpha_2 \text{ and } \beta_1 = \beta_2 $$
$$ H_1: \alpha_1 \neq \alpha_2 \text{ or } \beta_1 \neq \beta_2 $$
$$ \text{Test stat.} = \frac{RSS - (RSS_1 + RSS_2)}{RSS_1 + RSS_2} \times \frac{T - 2k}{k} $$
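As a worked illustration using the figures reported in part (b), with k = 2 estimated parameters (intercept and slope) in each regression:
$$ \text{Test stat.} = \frac{0.189 - (0.079 + 0.082)}{0.079 + 0.082} \times \frac{180 - 4}{2} \approx 15.3 $$
This is well above the 5% critical value from an $F(2, 176)$ distribution (roughly 3.0), so the null hypothesis of parameter stability across the two sub-periods would be rejected.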
First, the forward predictive failure test - i.e. we are trying to see if the model
for 1981M1-1994M12 can predict 1995M1-1995M12.
The test statistic is given by
$$ \text{Test stat.} = \frac{RSS - RSS_1}{RSS_1} \times \frac{T_1 - k}{T_2} $$
where $RSS$ is the residual sum of squares for the whole sample, $RSS_1$ is that for the estimation sub-sample, $T_1$ is the number of observations used in estimation, $T_2$ is the number of observations being predicted, and $k$ is the number of parameters estimated.
Now we need to be a little careful in our interpretation of what exactly are the
“first” and “second” sample periods. It would be possible to define T1 as
always being the first sample period. But I think it easier to say that T1 is
always the sample over which we estimate the model (even though it now
comes after the hold-out-sample). Thus T2 is still the sample that we are trying
to predict, even though it comes first. You can use either notation, but you
need to be clear and consistent. If you wanted to adopt the opposite convention to the one I suggest, then you would need to change every subscript 1 in the formula above to a 2, and every 2 to a 1.
Either way, we conclude that there is little evidence against the null
hypothesis. Thus our model is able to adequately back-cast the first 12
observations of the sample.
12. An outlier dummy variable will take the value one for one observation in
the sample and zero for all others. The Chow test involves splitting the sample
into two parts. If we then try to run the regression on both the sub-parts but
the model contains such an outlier dummy, then the observations on that dummy will all be zero in one of the two sub-samples. The dummy would then be a column of zeros in that sub-sample's regression, so its coefficient could not be estimated and that sub-sample regression could not be run in the usual way.
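A minimal sketch illustrating the problem in Python, with simulated placeholder data and an arbitrarily placed dummy:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical sample of 100 observations with an outlier dummy at position 30.
rng = np.random.default_rng(2)
df = pd.DataFrame({"x": rng.normal(size=100)})
df["dummy"] = 0.0
df.loc[30, "dummy"] = 1.0
df["y"] = 0.1 + 0.8 * df["x"] + 2.0 * df["dummy"] + rng.normal(scale=0.2, size=100)

# Chow-style split into two halves: the dummy is non-zero only in the first half.
first, second = df.iloc[:50], df.iloc[50:]
X2 = sm.add_constant(second[["x", "dummy"]])

# In the second sub-sample the dummy is a column of zeros, so X'X is singular
# and the coefficient on the dummy cannot be estimated.
print(second["dummy"].abs().sum())            # 0.0
print(np.linalg.matrix_rank(X2.to_numpy()))   # 2, not 3 -> rank deficient
```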