Chapter 9
Exercise Solutions
EXERCISE 9.1
From the equation for the AR(1) error model, $e_t = \rho e_{t-1} + v_t$, we have
$$\sigma_e^2 (1-\rho^2) = \sigma_v^2$$
and hence
$$\sigma_e^2 = \frac{\sigma_v^2}{1-\rho^2}$$
Multiplying the AR(1) equation by $e_{t-1}$ and taking expectations gives
$$E(e_t e_{t-1}) = \rho E(e_{t-1}^2) + 0 = \rho\,\sigma_e^2$$
Similarly,
$$e_t e_{t-2} = \rho\, e_{t-1}e_{t-2} + e_{t-2}v_t$$
and
$$E(e_t e_{t-2}) = \rho E(e_{t-1}e_{t-2}) + 0 = \rho E(e_t e_{t-1}) = \rho^2\sigma_e^2$$
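These results are easy to confirm numerically. The sketch below simulates an AR(1) error process and checks that the sample variance and autocorrelations match $\sigma_v^2/(1-\rho^2)$, $\rho$ and $\rho^2$; the values of $\rho$ and $\sigma_v$ are arbitrary choices for illustration, not taken from the exercise.

```python
import numpy as np

# Simulate e_t = rho*e_{t-1} + v_t and compare sample moments with the
# theoretical results derived above (arbitrary rho and sigma_v).
rng = np.random.default_rng(0)
rho, sigma_v, T = 0.7, 1.0, 200_000

v = rng.normal(0.0, sigma_v, T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + v[t]
e = e[1000:]                                    # drop burn-in observations

print(e.var(), sigma_v**2 / (1 - rho**2))       # both close to 1.961
print(np.corrcoef(e[1:], e[:-1])[0, 1], rho)    # lag-1 correlation ~ 0.7
print(np.corrcoef(e[2:], e[:-2])[0, 1], rho**2) # lag-2 correlation ~ 0.49
```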
EXERCISE 9.2
(a) The first two residual autocorrelations are computed as
$$r_1 = \frac{\sum_{t=2}^{T} \hat e_t \hat e_{t-1}}{\sum_{t=2}^{T} \hat e_{t-1}^2}
\qquad\qquad
r_2 = \frac{\sum_{t=3}^{T} \hat e_t \hat e_{t-2}}{\sum_{t=3}^{T} \hat e_{t-2}^2}$$
(b) (i) Using the sums of squares and cross products of the residuals,
$$r_1 = \frac{0.0979}{0.5032} = 0.1946
\qquad\qquad
r_2 = \frac{0.1008}{0.5023} = 0.2007$$
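For completeness, a minimal sketch of how these autocorrelations can be computed from a vector of least squares residuals; the residual values in the example are placeholders, not the residuals from the exercise.

```python
import numpy as np

# r_k = sum_{t=k+1}^{T} e_t * e_{t-k} / sum_{t=k+1}^{T} e_{t-k}^2,
# matching the expressions used above.
def residual_autocorr(ehat, k):
    num = np.sum(ehat[k:] * ehat[:-k])
    den = np.sum(ehat[:-k] ** 2)
    return num / den

ehat = np.array([0.3, -0.1, 0.25, 0.05, -0.2, 0.15])   # illustrative residuals only
print(residual_autocorr(ehat, 1), residual_autocorr(ehat, 2))
```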
EXERCISE 9.3
(a)
Equation (9.49) can be used to conduct two Lagrange multiplier tests for AR(1) errors.
The first test is to test whether the coefficient of $\hat e_{t-1}$ is significantly different from zero.
The null hypothesis is $H_0: \rho = 0$. The value of the test statistic is
$$t = \frac{0.446}{0.201} = 2.219$$
The critical t value for a 5% level of significance and 23 degrees of freedom is 2.069.
Since 2.219 > 2.069, we reject the null hypothesis and conclude that first order
autocorrelation is present.
The second LM test examines whether or not the $R^2$ from (9.49) is significant. The null
hypothesis is again $H_0: \rho = 0$ and the test statistic value is
$$LM = T \times R^2 = 26 \times 0.165 = 4.29$$
When the null hypothesis is true, $LM$ has a $\chi^2_{(1)}$-distribution with a 5% critical value of
3.84. Since 4.29 > 3.84, we reject the null hypothesis and conclude that first order
autoregressive errors exist.
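A minimal sketch of the LM version of the test using only numpy: regress the residuals on the original regressor and the lagged residuals and compute $T \times R^2$. The data arrays here are randomly generated placeholders, and whether $T$ or $T-1$ is used as the multiplier depends on how the first observation is treated.

```python
import numpy as np

# LM = T * R^2 from the auxiliary regression of e_t on a constant, x_t and e_{t-1}.
def lm_ar1_test(x, ehat):
    y = ehat[1:]
    X = np.column_stack([np.ones(len(y)), x[1:], ehat[:-1]])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    r2 = 1.0 - resid.var() / y.var()
    return len(y) * r2            # compare with the chi2(1) 5% critical value 3.84

rng = np.random.default_rng(1)
x, ehat = rng.normal(size=26), rng.normal(size=26)     # placeholder data only
print(lm_ar1_test(x, ehat))
```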
(b)
Ignoring autocorrelation means the estimates will be unbiased but no longer the best
estimates. Furthermore, the standard errors will be incorrect resulting in misleading
confidence intervals and hypothesis tests. The standard errors from the model with AR(1)
errors are larger than the standard errors from the least-squares estimated model. Thus, it
is likely that the least squares estimates and standard errors have overstated the precision
of the estimates in the relationship between disposer shipments and durable goods
expenditure. If autocorrelation is ignored, the confidence intervals will be narrower than
the correct confidence intervals, and hypothesis tests will have a probability of a Type I
error that is greater than the specified significance level.
EXERCISE 9.4
(a)
(i) $\mathrm{corr}(e_t, e_{t-1}) = \rho = 0.9$
(ii) $\mathrm{corr}(e_t, e_{t-4}) = \rho^4 = 0.9^4 = 0.6561$
(iii) $\sigma_e^2 = \dfrac{\sigma_v^2}{1-\rho^2} = \dfrac{1}{1-0.9^2} = 5.263$
(b)
(i) $\mathrm{corr}(e_t, e_{t-1}) = \rho = 0.4$
(ii) $\mathrm{corr}(e_t, e_{t-4}) = \rho^4 = 0.4^4 = 0.0256$
(iii) $\sigma_e^2 = \dfrac{\sigma_v^2}{1-\rho^2} = \dfrac{1}{1-0.4^2} = 1.190$
When the correlation between the current and previous period error is weaker, the
correlations between the current error and the errors at more distant lags die out relatively
quickly, as is illustrated by a comparison of $\rho^4 = 0.6561$ in part (a)(ii) with $\rho^4 = 0.0256$ in
part (b)(ii). Also, the larger the correlation $\rho$, the greater the variance $\sigma_e^2$, as is illustrated
by a comparison of $\sigma_e^2 = 5.263$ in part (a)(iii) with $\sigma_e^2 = 1.190$ in part (b)(iii).
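The comparison can be reproduced directly (with $\sigma_v^2 = 1$ as in the exercise):

```python
import numpy as np

# Lag correlations rho**k and error variance sigma_v^2/(1 - rho^2) for the two cases.
for rho in (0.9, 0.4):
    corrs = [rho ** k for k in range(1, 5)]
    var_e = 1.0 / (1.0 - rho ** 2)
    print(rho, np.round(corrs, 4), round(var_e, 3))
# rho = 0.9: lag-4 correlation 0.6561, variance 5.263
# rho = 0.4: lag-4 correlation 0.0256, variance 1.190
```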
EXERCISE 9.5
(a)
(i) $\hat e_{T+1} = \hat\rho\,\hat e_T$
(ii) $\hat e_{T+2} = \hat\rho\,\hat e_{T+1} = \hat\rho^2\,\hat e_T$
(b) Equation (9.25) gives us the nonlinear least squares estimates of the coefficients,
$\hat\beta_1 = 3.89877$ and $\hat\beta_2 = 0.88837$. The final observation in bangla.dat is $A_{34} = 53.86$,
$P_{34} = 0.89$, from which the nonlinear least squares residual for the last observation, $\hat e_T$,
can be computed. The forecast value for $\ln(A_{T+1})$ is then
$$\ln(\hat A_{T+1}) = \hat\beta_1 + \hat\beta_2\ln(P_{T+1}) + \hat\rho\,\hat e_T = 3.89877 + 0.88837\ln(1) + 0.08069 = 3.97946$$
Similarly, the forecast value for $\ln(A_{T+2})$ is
$$\ln(\hat A_{T+2}) = \hat\beta_1 + \hat\beta_2\ln(P_{T+2}) + \hat\rho^2\hat e_T = 3.89877 + 0.88837\ln(1.2) + 0.03406 = 4.09480$$
In Chapter 4 we are told that there are two ways to forecast a dependent variable when the
left-hand side of the equation is the logarithm of that variable. The first method is to
calculate the natural predictor $\hat y_n$, which is the better predictor to use when the sample
size is small:
$$\hat y_n = \exp(b_1 + b_2 x)$$
The second method is to calculate a corrected predictor $\hat y_c$, which is the better predictor
to use when the sample size is large:
$$\hat y_c = \exp(b_1 + b_2 x + \hat\sigma^2/2) = \hat y_n\, e^{\hat\sigma^2/2}$$
Applied to our nonlinear least squares estimation, the natural predictors are
$$\hat A_{T+1} = \exp\big(\hat\beta_1 + \hat\beta_2\ln(P_{T+1}) + \hat e_{T+1}\big)
\qquad
\hat A_{T+2} = \exp\big(\hat\beta_1 + \hat\beta_2\ln(P_{T+2}) + \hat e_{T+2}\big)$$
and the corresponding corrected predictors are
$$\hat A_{T+1} = \exp\big(\hat\beta_1 + \hat\beta_2\ln(P_{T+1}) + \hat e_{T+1} + \hat\sigma_v^2/2\big)
\qquad
\hat A_{T+2} = \exp\big(\hat\beta_1 + \hat\beta_2\ln(P_{T+2}) + \hat e_{T+2} + \hat\sigma_v^2/2\big)$$
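A sketch of these predictors in code. The coefficient estimates and the terms $\hat\rho\hat e_T$ and $\hat\rho^2\hat e_T$ are taken from above; the error variance $\hat\sigma_v^2$ is not reported in this solution, so the value used below is a hypothetical placeholder.

```python
import numpy as np

b1, b2 = 3.89877, 0.88837
e_fore = {1.0: 0.08069, 1.2: 0.03406}   # rho_hat*e_T and rho_hat^2*e_T for P = 1 and P = 1.2
sigma_v2 = 0.05                          # hypothetical estimate of var(v_t), illustration only

for P, e in e_fore.items():
    log_fc = b1 + b2 * np.log(P) + e             # 3.97946 and 4.09480
    natural = np.exp(log_fc)                     # natural predictor
    corrected = natural * np.exp(sigma_v2 / 2)   # corrected predictor
    print(P, round(log_fc, 5), round(natural, 2), round(corrected, 2))
```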
EXERCISE 9.6
We consider two ways to derive the lag weights, by recursive substitution and by equating
coefficients of the lag operator. Recursive substitution is tedious but does not require new
machinery. Using the lag operator requires new machinery, but is less tedious.
Recursive substitution
One way to find the required expressions for the lag weights is to use recursive
substitution on the ARDL model. Once we have substituted in enough lagged equations,
we can determine the lag weights by observation. Recursive substitution begins with the
current ARDL model
$$y_t = \delta + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t \tag{1}$$
The corresponding equation for $y_{t-1}$ is
$$y_{t-1} = \delta + \delta_3 x_{t-4} + \theta_1 y_{t-2} + \theta_2 y_{t-3} + \theta_3 y_{t-4} + v_{t-1} \tag{2}$$
Substituting (2) into (1) gives
$$\begin{aligned}
y_t &= \delta + \delta_3 x_{t-3} + \theta_1\big(\delta + \delta_3 x_{t-4} + \theta_1 y_{t-2} + \theta_2 y_{t-3} + \theta_3 y_{t-4} + v_{t-1}\big) + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t \\
    &= \delta + \delta_3 x_{t-3} + \theta_1\delta + \theta_1\delta_3 x_{t-4} + \theta_1^2 y_{t-2} + \theta_1\theta_2 y_{t-3} + \theta_1\theta_3 y_{t-4}
       + \theta_1 v_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t
\end{aligned} \tag{3}$$
This process is repeated with the longer lags until the required $x_{t-s}$ is reached. In this
model, we stop the process of recursive substitution after substituting in the equation for
$y_{t-3}$. At this stage it should be clear that further substitution would not introduce any
additional lags of the independent variable $x_{t-s}$ for $s = 1, 2, 3, 4, 5$ or $6$. This ensures
that the expressions for the lag weights that we determine will not change with further
substitution. Rearranging the final equation should give an expression similar to
$$\begin{aligned}
y_t ={}& \delta\big(1 + \theta_1 + \theta_1^2 + \theta_2 + \theta_1^3 + 2\theta_1\theta_2 + \theta_3\big) \\
      & + \delta_3 x_{t-3} + \theta_1\delta_3 x_{t-4} + \big(\theta_1^2 + \theta_2\big)\delta_3 x_{t-5}
        + \big(\theta_1^3 + 2\theta_1\theta_2 + \theta_3\big)\delta_3 x_{t-6} \\
      & + \text{terms in } y_{t-4},\, y_{t-5},\, y_{t-6} \text{ and } v_t,\, v_{t-1},\, v_{t-2},\, v_{t-3}
\end{aligned} \tag{4}$$
From (4) the lag weights can be identified as $\beta_0 = \beta_1 = \beta_2 = 0$, $\beta_3 = \delta_3$, $\beta_4 = \theta_1\delta_3$,
$\beta_5 = (\theta_1^2 + \theta_2)\delta_3$, $\beta_6 = (\theta_1^3 + 2\theta_1\theta_2 + \theta_3)\delta_3$, and for longer lags
$$\beta_s = \theta_1\beta_{s-1} + \theta_2\beta_{s-2} + \theta_3\beta_{s-3}, \qquad \text{for } s \ge 6$$
Using the lag operator
The lag operator $L$ operates on a variable, say $y_t$, such that $Ly_t = y_{t-1}$. Although it will
seem like magic to you at first, it is possible to do algebra with the lag operator. In
particular, raising $L$ to the power $s$, written $L^s$, has the effect of lagging $y_t$ $s$ times;
that is, $L^s y_t = y_{t-s}$. With this little bit of knowledge, the model
$$y_t = \delta + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \theta_3 y_{t-3} + v_t$$
can be written as
$$(1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)\,y_t = \delta + \delta_3 L^3 x_t + v_t$$
A bit more faith is required for the next step, where we invert the left-hand side function of
the lag operator to obtain
$$y_t = (1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)^{-1}\big(\delta + \delta_3 L^3 x_t + v_t\big)$$
Now consider the infinite lag representation
$$y_t = \alpha + \sum_{s=0}^{\infty}\beta_s x_{t-s} + e_t
     = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \beta_4 x_{t-4} + \cdots + e_t$$
We now have two different equations for the same model, where $y_t$ is the left-hand side
variable for both of them. It follows that the right-hand sides must be equal
$$(1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)^{-1}\big(\delta + \delta_3 L^3 x_t + v_t\big)
  = \alpha + \big(\beta_0 + \beta_1 L + \beta_2 L^2 + \beta_3 L^3 + \beta_4 L^4 + \cdots\big)x_t + e_t$$
and that
$$\delta + \delta_3 L^3 x_t + v_t
  = (1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)\Big[\alpha + \big(\beta_0 + \beta_1 L + \beta_2 L^2 + \beta_3 L^3 + \beta_4 L^4 + \cdots\big)x_t + e_t\Big]$$
The right-hand side can be written as
$$\begin{aligned}
(1 - \theta_1 - \theta_2 - \theta_3)\alpha
  &+ \Big[\beta_0 + (\beta_1 - \theta_1\beta_0)L + (\beta_2 - \theta_2\beta_0 - \theta_1\beta_1)L^2
      + (\beta_3 - \theta_3\beta_0 - \theta_2\beta_1 - \theta_1\beta_2)L^3 \\
  &\qquad + (\beta_4 - \theta_3\beta_1 - \theta_2\beta_2 - \theta_1\beta_3)L^4
      + (\beta_5 - \theta_3\beta_2 - \theta_2\beta_3 - \theta_1\beta_4)L^5
      + (\beta_6 - \theta_3\beta_3 - \theta_2\beta_4 - \theta_1\beta_5)L^6 + \cdots\Big]x_t \\
  &+ (1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)\,e_t
\end{aligned}$$
Equating coefficients of like terms establishes the relationship between the ARDL model
and its infinite lag representation. For the constant and error term, we have
$$\delta = (1 - \theta_1 - \theta_2 - \theta_3)\,\alpha$$
$$v_t = (1 - \theta_1 L - \theta_2 L^2 - \theta_3 L^3)\,e_t = e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2} - \theta_3 e_{t-3}$$
For the lag weights we equate coefficients of equal powers of the lag operator
$$\begin{aligned}
0 &= \beta_0 \\
0 &= \beta_1 - \theta_1\beta_0 \\
0 &= \beta_2 - \theta_2\beta_0 - \theta_1\beta_1 \\
\delta_3 &= \beta_3 - \theta_3\beta_0 - \theta_2\beta_1 - \theta_1\beta_2 \\
0 &= \beta_4 - \theta_3\beta_1 - \theta_2\beta_2 - \theta_1\beta_3 \\
0 &= \beta_5 - \theta_3\beta_2 - \theta_2\beta_3 - \theta_1\beta_4 \\
0 &= \beta_6 - \theta_3\beta_3 - \theta_2\beta_4 - \theta_1\beta_5
\end{aligned}$$
From these expressions we obtain
$$\begin{aligned}
\beta_0 &= \beta_1 = \beta_2 = 0 \\
\beta_3 &= \delta_3 \\
\beta_4 &= \theta_1\beta_3 \\
\beta_5 &= \theta_2\beta_3 + \theta_1\beta_4 \\
\beta_6 &= \theta_3\beta_3 + \theta_2\beta_4 + \theta_1\beta_5 \\
\beta_s &= \theta_3\beta_{s-3} + \theta_2\beta_{s-2} + \theta_1\beta_{s-1} \qquad \text{for } s \ge 6
\end{aligned}$$
For this process to work, and for the lag weights to be valid, coefficients for long lags
must converge to zero.
The lag operator method will seem daunting at first, but it is worth the investment. You
can go crazy doing recursive substitution.
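The recursion for the lag weights is straightforward to implement. The sketch below uses placeholder values for $\delta_3$ and the $\theta$'s (none are specified in this exercise) and reproduces $\beta_0 = \beta_1 = \beta_2 = 0$, $\beta_3 = \delta_3$ and the recursive pattern derived above.

```python
import numpy as np

# Lag weights for y_t = delta + delta3*x_{t-3} + theta1*y_{t-1} + theta2*y_{t-2} + theta3*y_{t-3} + v_t.
def lag_weights(delta3, theta1, theta2, theta3, n_weights=12):
    beta = np.zeros(n_weights)
    beta[3] = delta3                      # beta_0 = beta_1 = beta_2 = 0
    for s in range(4, n_weights):
        beta[s] = theta1 * beta[s - 1] + theta2 * beta[s - 2] + theta3 * beta[s - 3]
    return beta

# Placeholder coefficient values, for illustration only.
print(np.round(lag_weights(delta3=0.5, theta1=0.4, theta2=0.2, theta3=0.1), 4))
```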
EXERCISE 9.7
(a)
To find the 95% confidence intervals, we first find the forecast error standard errors using
the expressions derived in Section 9.5:
$$\sigma_1 = \hat\sigma_v = 0.4293$$
$$\sigma_2 = \hat\sigma_v\sqrt{1 + \hat\theta_1^2} = 0.4293\sqrt{1 + 0.033^2} = 0.42953$$
$$\sigma_3 = \hat\sigma_v\sqrt{\big(\hat\theta_1^2 + \hat\theta_2\big)^2 + \hat\theta_1^2 + 1} = 0.44143$$
The model was estimated using monthly observations from January 1985 to December
2005, a total of 252 observations and 248 degrees of freedom. Confidence intervals were
constructed using the t-value $t_{(0.975,\,248)} = 1.9696$.
Month       $\widehat{DIP}_{T+j}$   $\sigma_j$    Lower bound   Upper bound
January     0.6250                  0.42930       -0.221        1.471
February    0.5765                  0.42953       -0.269        1.422
March       0.4837                  0.44143       -0.386        1.353
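The interval bounds in the table follow from forecast $\pm\; t \times \sigma_j$, which can be checked directly (small rounding differences aside):

```python
import numpy as np

t_crit = 1.9696                                    # t_(0.975, 248)
forecasts = np.array([0.6250, 0.5765, 0.4837])     # January, February, March forecasts
sigmas = np.array([0.42930, 0.42953, 0.44143])     # forecast error standard errors

print(np.round(forecasts - t_crit * sigmas, 3))    # close to the lower bounds above
print(np.round(forecasts + t_crit * sigmas, 3))    # close to the upper bounds above
```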
EXERCISE 9.8
Equation (9.28) is the estimated version of the model
$$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \theta_1 y_{t-1} + v_t$$
In equation (9C.6) on page 266, expressions for the lag weights of the more general model
$$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \delta_2 x_{t-2} + \delta_3 x_{t-3} + \theta_1 y_{t-1} + \theta_2 y_{t-2} + v_t$$
are given. Expressions for the lag weights for (9.28) can be obtained from the more
general ones by setting $\delta_2 = \delta_3 = \theta_2 = 0$. Doing so yields
Weight                                     Estimate
$\beta_0 = \delta_0$                        0.7766
$\beta_1 = \delta_1 + \theta_1\delta_0$    -0.2969
$\beta_2 = \theta_1\beta_1$                -0.1200
$\beta_3 = \theta_1\beta_2$                -0.0485
$\beta_4 = \theta_1\beta_3$                -0.0196
$\beta_5 = \theta_1\beta_4$                -0.0079
$\beta_6 = \theta_1\beta_5$                -0.0032
From the lag weight distribution, we can see that the immediate effect of a temporary 1%
increase in the price of sugar cane is an increase in the area planted of 0.77%. In
subsequent periods there are negative effects on the amount of area planted. One period
after the temporary price increase the area planted decreases by 0.30%, the second period
lagged effect is a decrease of 0.12%, the third period lagged effect is a decrease of 0.05%,
the fourth period lagged effect is a decrease of 0.02%, and the fifth period lagged effect is
a negligible decrease of less than 0.01%. This estimated lag weight distribution suggests
that producers initially overreact to the price change. If the price increase was a sustained
one, the final equilibrium change in area would be less than that which occurred in the
current period.
Notice that we predict that producers will initially overreact to a price change because
$\delta_1 < -\theta_1\delta_0$, so that $\beta_1 < 0$. If $\delta_1 > -\theta_1\delta_0$, their initial response is one of
under-reaction. If $\delta_1 = -\theta_1\delta_0$, we have the AR(1) error model. There is no lagged
response to price by itself, but there is a lagged response to the error in the previous period.
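A sketch of the geometrically declining lag weights behind the table above. Neither $\theta_1$ nor $\delta_1$ is reported explicitly in this solution, so the values used below are back-solved assumptions ($\theta_1 \approx 0.404$, $\delta_1 \approx -0.611$) chosen only so that $\beta_0$ and $\beta_1$ match the table.

```python
import numpy as np

# beta_0 = d0, beta_1 = d1 + theta1*d0, beta_j = theta1*beta_{j-1} for j >= 2.
def ardl11_weights(d0, d1, theta1, n=7):
    beta = [d0, d1 + theta1 * d0]
    for _ in range(2, n):
        beta.append(theta1 * beta[-1])
    return np.array(beta)

# d1 and theta1 are assumed values, not estimates reported in the solution.
print(np.round(ardl11_weights(d0=0.7766, d1=-0.6107, theta1=0.404), 4))
# roughly 0.7766, -0.2969, -0.1200, -0.0485, -0.0196, -0.0079, -0.0032
```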
EXERCISE 9.9
(a)
(b)
The existence of AR(1) errors can be tested from the estimated auxiliary regression
$$\hat e_t = \hat a_1 + \hat a_2\ln(U_t) + \hat\rho\,\hat e_{t-1} + \hat v_t$$
either by testing the significance of $\hat\rho$ or by using the statistic $LM = T\times R^2$ from this
equation. The value of the F-statistic for testing the significance of $\rho$ is $F = 5.047$ with a
p-value of 0.036. Also, $LM = T\times R^2 = 24\times 0.19376 = 4.650$ with a p-value of 0.031.
Since both p-values are less than 0.05, we reject $H_0: \rho = 0$ at a 5% significance level and
conclude that autocorrelation exists. The existence of autocorrelation means the assumption
that the $e_t$ are independent is not correct. This problem causes the confidence interval for
$\beta_2$ in part (a) to be incorrect; it could convey a false sense of the reliability of $b_2$.
(c) The estimated autocorrelation coefficient from the model with AR(1) errors is
$$\hat\rho = 0.4486 \quad (0.2029)$$
This confidence interval is slightly narrower than that given in part (a). A direct
comparison with the interval in part (a) is difficult because the least squares standard
errors are incorrect in the presence of AR(1) errors. However, one could conjecture that
the least squares confidence interval is larger than it should be, implying unjustified
imprecision.
[Figure: correlogram of the residuals, correlations plotted against lag]
EXERCISE 9.10
(a)
(se) (0.8150) (0.1563) (0.3622) (0.2085)
EXERCISE 9.11
(a)
The marginal cost is the first derivative of the total cost with respect to quantity
$$MC = \frac{dTC}{dQ} = \alpha_2 + 2\alpha_3 Q$$
The marginal revenue is the first derivative of the total revenue with respect to quantity
$$MR = \frac{dTR}{dQ} = \beta_1 + 2\beta_2 Q$$
(b)
To find the profit maximising quantity $Q^*$, equate the marginal revenue and the marginal
cost
$$MR = MC \qquad\Longrightarrow\qquad \beta_1 + 2\beta_2 Q = \alpha_2 + 2\alpha_3 Q$$
The estimated least squares regression for the total revenue function, with standard errors
in parentheses, is
$$\widehat{TR} = 174.2803\,Q - 0.5024\,Q^2$$
$$\text{(se)}\qquad (4.5399)\qquad (0.0235)$$
The estimated least squares regression for the total cost function is
$$\widehat{TC} = 2066.083 - 1.5784\,Q + 0.2277\,Q^2$$
Solving $MR = MC$ for the profit maximising output gives
$$Q^* = \frac{a_2 - b_1}{2(b_2 - a_3)} = \frac{-1.5784 - 174.2803}{2(-0.5024 - 0.2277)} = 120$$
Assuming that the level of production for the next three months is based on the profit
maximising level of output
$$\widehat{TR} = 174.2803\times 120 - 0.5024\times 120^2 = 13679$$
$$\widehat{TC} = 2066.083 - 1.5784\times 120 + 0.2277\times 120^2 = 5155$$
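These calculations are easy to verify; the profit implied by them ($TR - TC \approx 8524$) is the figure referred to later when the AR(1)-corrected predictions are compared.

```python
b1, b2 = 174.2803, -0.5024               # TR = b1*Q + b2*Q^2
a1, a2, a3 = 2066.083, -1.5784, 0.2277   # TC = a1 + a2*Q + a3*Q^2

Q_star = (a2 - b1) / (2 * (b2 - a3))     # from MR = MC, approximately 120
Q = round(Q_star)                        # the solution rounds to Q = 120
TR = b1 * Q + b2 * Q ** 2                # ~13679
TC = a1 + a2 * Q + a3 * Q ** 2           # ~5155
print(Q, round(TR), round(TC), round(TR - TC))   # profit ~8524
```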
(e)
... $\chi^2$ statistic. The same tests for AR(1) errors in the total revenue function yield $F = 113$
(p-value = 0.0000) for the significance of $\rho$ and $LM = T\times R^2 = 34.4$ (p-value = 0.0000)
using the $\chi^2$ statistic. We conclude that both functions have correlated errors.
Examination of the correlograms of the residuals confirms this conclusion. The
significance bounds in the figures below are at $\pm 1.96/\sqrt{48} = \pm 0.283$. We find that
there are several statistically significant correlations that exceed these bounds. In
particular, $r_1$ of the total cost model and $r_1, r_2, r_3$ and $r_4$ of the total revenue model
are statistically significant. They also lead us to conclude that the errors are correlated.
[Figure xr9.11: Residual correlograms for the total cost and total revenue functions (correlations plotted against lag)]
The estimated total revenue function, allowing for AR(1) errors, with standard errors in
parentheses, is
$$\widehat{TR} = 171.58\,Q - 0.5085\,Q^2, \qquad \hat e_t = 0.9495\,\hat e_{t-1} + \hat v_t$$
$$\text{(se)}\quad (8.01)\quad (0.0248) \qquad\qquad\quad (0.0634)$$
For the total cost function the estimated AR(1) error process is
$$\hat e_t = 0.4595\,\hat e_{t-1} + \hat v_t$$
$$\qquad\quad (0.1521)$$
The profit maximising level of output suggested by these results is
$$Q^* = \frac{a_2 - b_1}{2(b_2 - a_3)} = \frac{-5.7521 - 171.5814}{2(-0.5085 - 0.2415)} = 118$$
(h)
In this case, because the errors are assumed autocorrelated, the total revenue and total cost
errors for month 48 have a bearing on the predictions, and the predictions will be different
in each of the future three months.
For the total revenue function, the estimated error for month 48 is
Therefore, given Q = 118 for the next three months, the total cost predictions for the next
three months are given by
PROFITS 49 = 5349
PROFITS 50 = 5779
PROFITS 51 = 6031
Because eTR ,48 is negative, and its impact declines as we predict further into the future, the
total revenue predictions become larger the further into the future we predict. The opposite
happens with total cost; it declines because eTC ,48 is positive. Combining these two
influences means that the predictions for profit increase over time. These predictions are,
however, much lower than 8524, the prediction for profit that was obtained when
autocorrelation was ignored. Thus, even though autocorrelation has little impact on the
optimal setting for $Q^*$, it has considerable impact on the predictions of profit. This impact
is caused by a change in the coefficient estimates, a relatively large negative residual for
revenue in month 48, and a relatively large positive residual for cost in month 48.
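A sketch of the mechanism just described: each future month's revenue prediction adds $\hat\rho^{\,j}$ times the month-48 residual, so the adjustment dies out geometrically. The month-48 residual value below is a hypothetical placeholder (it is not reported in the solution), so the printed numbers are illustrative only.

```python
rho_tr = 0.9495                 # estimated AR(1) coefficient for the revenue errors
b1, b2 = 171.58, -0.5085        # revised total revenue coefficients
Q = 118
e_tr48 = -300.0                 # hypothetical (negative) month-48 revenue residual

for j in (1, 2, 3):             # months 49, 50, 51
    tr = b1 * Q + b2 * Q ** 2 + rho_tr ** j * e_tr48
    print(j, round(tr, 1))      # predictions rise as rho**j * e_tr48 shrinks toward zero
```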
EXERCISE 9.12
(a)
The estimated equation, with standard errors in parentheses, is
$$\cdots + 0.7746\ln(P_t) - 0.2175\ln(P_{t-1}) - 0.0026\ln(P_{t-2}) + 0.5868\ln(P_{t-3}) - 0.0143\ln(P_{t-4})$$
$$\text{(se)}\;\; (0.1006)\;\; (0.3129)\;\; (0.3185)\;\; (0.3221)\;\; (0.3153)\;\; (0.2985)$$
The implied multipliers are

Lag   Delay      Interim
0      0.7746     0.7746
1     -0.2175     0.5572
2     -0.0026     0.5546
3      0.5868     1.1414
4     -0.0143     1.1271
If the lag weights are restricted to lie on a straight line, $\beta_i = a_0 + a_1 i$, then
$$\beta_0 = a_0 \;\;(i=0), \quad
  \beta_1 = a_0 + a_1 \;\;(i=1), \quad
  \beta_2 = a_0 + 2a_1 \;\;(i=2), \quad
  \beta_3 = a_0 + 3a_1 \;\;(i=3), \quad
  \beta_4 = a_0 + 4a_1 \;\;(i=4)$$
The estimated restricted equation has standard errors (0.1056), (0.2594) and (0.1088).
The least squares estimates of $a_0$ and $a_1$ are 0.4247 and $-0.0996$ respectively.
(d)
$$\begin{aligned}
\hat\beta_0 &= \hat a_0 = 0.42467 \\
\hat\beta_1 &= \hat a_0 + \hat a_1 = 0.42467 - 0.09963 = 0.3250 \\
\hat\beta_2 &= \hat a_0 + 2\hat a_1 = 0.42467 - 2\times 0.09963 = 0.2254 \\
\hat\beta_3 &= \hat a_0 + 3\hat a_1 = 0.42467 - 3\times 0.09963 = 0.1258 \\
\hat\beta_4 &= \hat a_0 + 4\hat a_1 = 0.42467 - 4\times 0.09963 = 0.0261
\end{aligned}$$
These lag weights satisfy expectations as they are positive and diminish in magnitude as
the lag length increases. They imply that the adjustment to a sustained price change takes
place gradually, with the biggest impact being felt immediately and with a declining
impact being felt in subsequent periods. The linear constraint has fixed the original
problem where the signs and magnitudes of the lag weights varied unexpectedly.
Lag   Delay     Interim
0     0.4247    0.4247
1     0.3250    0.7497
2     0.2254    0.9751
3     0.1258    1.1009
4     0.0261    1.1270
These delay multipliers are all positive and steadily decrease as the lag becomes more
distant. This result, compared to the positive and negative multipliers obtained earlier, is a
more reasonable one. It is interesting that the total effect, given by the 4-year interim
multiplier, is almost identical in both cases, and the 3-year interim multipliers are very
similar. The earlier interim multipliers are quite different however, with the restricted
weights leading to a smaller initial impact.
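The restricted delay and interim multipliers can be reproduced directly from the estimates $a_0 = 0.42467$ and $a_1 = -0.09963$:

```python
import numpy as np

a0, a1 = 0.42467, -0.09963
delay = np.array([a0 + a1 * i for i in range(5)])   # lag weights for lags 0 to 4
interim = np.cumsum(delay)                          # running totals

print(np.round(delay, 4))     # 0.4247, 0.3250, 0.2254, 0.1258, 0.0261
print(np.round(interim, 4))   # 0.4247, 0.7497, 0.9751, 1.1009, 1.1270
```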
EXERCISE 9.13
(a) The estimated equation with standard errors in parentheses is
( 0.0601)
(i) $y_t$ is the predicted change in housing starts (in thousands) from month $t-1$ to month
$t$ and $x_t$ is the change in interest rates from month $t-1$ to month $t$. The estimated
coefficients of $x_t$ and $x_{t-1}$ are negative and suggest that the immediate effect of a
temporary one unit increase in $x_t$ is a decrease in $y_t$ of 12.30 and the one month
lagged effect is a decrease in $y_t$ of 27.06. The coefficient of $y_{t-1}$ is negative,
suggesting that a positive change in housing starts leads to a negative change in
housing starts in the following period. These signs are generally in line with our
expectations. However, more implications of the signs and magnitudes of the
coefficients can be obtained by examining the lag weights in the infinite lag
representation, as is done later in the question. The only coefficient estimate that is
significantly different from zero is that of $y_{t-1}$. Thus, with two important coefficients
not significantly different from zero (those for $x_t$ and $x_{t-1}$), the model is not a
reasonable one.
(ii) Testing the hypothesis $H_0: \delta_1 = -\theta_1\delta_0$ against the alternative $H_1: \delta_1 \ne -\theta_1\delta_0$
delivers an F-test value of 0.4839 with a p-value of 0.4873. Since the p-value is greater than
the 0.05 level of significance, we do not reject the null hypothesis. On the basis of this
test, a restricted model of the form $y_t = \beta_1 + \beta_2 x_t + e_t$ with AR(1) errors is reasonable.
(iii) A correlogram of the residuals is presented below. Significance bounds are drawn at
$\pm 1.96/\sqrt{250} = \pm 0.124$. Although not large, the correlations $r_2$, $r_{12}$ and $r_{24}$ are
statistically significant. Since the data are monthly, there could be an annual effect that is
not being picked up. Overall, given these correlations and the insignificant coefficients
mentioned in part (a), further modelling is in order.
[Figure: correlogram of the residuals from the part (a) model]
(b) The estimated ARDL model, with standard errors in parentheses, is
$$\hat y_t = 0.4856 - 58.4292\,x_{t-3} - 0.5227\,y_{t-1} - 0.2903\,y_{t-2} - 0.1641\,y_{t-3}$$
$$\text{(se)}\;\; (5.5671)\;\; (23.8935)\;\; (0.0631)\;\; (0.0684)\;\; (0.0634)$$
(i) In contrast to the model estimated in part (a) all the estimated coefficients are
significant (with the exception of the intercept). There is a 3-month lagged effect on
housing starts from an initial interest rate change, and the effect is negative as one
would expect. The implications of the signs and magnitudes of the lagged y variables
are better assessed from the consequent estimates of the lagged weights.
(ii) Using the results from Exercise 9.6 we obtain the following lag weights

Lag   Weight                                                                Estimate
0     $\beta_0 = 0$                                                           0
1     $\beta_1 = 0$                                                           0
2     $\beta_2 = 0$                                                           0
3     $\beta_3 = \delta_3$                                                  -58.4292
4     $\beta_4 = \theta_1\beta_3$                                            30.5396
5     $\beta_5 = \theta_1\beta_4 + \theta_2\beta_3$                           1.0023
6     $\beta_6 = \theta_1\beta_5 + \theta_2\beta_4 + \theta_3\beta_3$         0.1963
7     $\beta_7 = \theta_1\beta_6 + \theta_2\beta_5 + \theta_3\beta_4$        -5.4046
8     $\beta_8 = \theta_1\beta_7 + \theta_2\beta_6 + \theta_3\beta_5$         2.6034
9     $\beta_9 = \theta_1\beta_8 + \theta_2\beta_7 + \theta_3\beta_6$         0.1762
10    $\beta_{10} = \theta_1\beta_9 + \theta_2\beta_8 + \theta_3\beta_7$      0.0388
11    $\beta_{11} = \theta_1\beta_{10} + \theta_2\beta_9 + \theta_3\beta_8$  -0.4986
12    $\beta_{12} = \theta_1\beta_{11} + \theta_2\beta_{10} + \theta_3\beta_9$   0.2204
[Figure: estimated lag weights plotted against lag]
[Figure: correlogram of the residuals from the ARDL model]
The forecast of $y_t$ for January is
$$\begin{aligned}
\hat y_{T+1} &= \hat\delta + \hat\delta_3 x_{T-2} + \hat\theta_1 y_T + \hat\theta_2 y_{T-1} + \hat\theta_3 y_{T-2} \\
             &= 0.4856 - 58.4292\times 0.3 - 0.5227\times(-129) - 0.2903\times 85 - 0.1641\times(-112) \\
             &= 44.08
\end{aligned}$$
Similarly the forecast of $y_t$ for February is
$$\begin{aligned}
\hat y_{T+2} &= \hat\delta + \hat\delta_3 x_{T-1} + \hat\theta_1 \hat y_{T+1} + \hat\theta_2 y_T + \hat\theta_3 y_{T-1} \\
             &= 0.4856 - 58.4292\times 0.26 - 0.5227\times 44.08 - 0.2903\times(-129) - 0.1641\times 85 \\
             &= -14.24
\end{aligned}$$
The forecast of $y_t$ for March is
$$\begin{aligned}
\hat y_{T+3} &= \hat\delta + \hat\delta_3 x_{T} + \hat\theta_1 \hat y_{T+2} + \hat\theta_2 \hat y_{T+1} + \hat\theta_3 y_{T} \\
             &= 0.4856 - 58.4292\times(-0.06) - 0.5227\times(-14.24) - 0.2903\times 44.08 - 0.1641\times(-129) \\
             &= 19.80
\end{aligned}$$
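A small sketch reproducing these recursive forecasts; the coefficient estimates and the sample values of $y$ and $x$ are those used in the calculations above.

```python
import numpy as np

delta, delta3 = 0.4856, -58.4292
theta = [-0.5227, -0.2903, -0.1641]        # coefficients of y_{t-1}, y_{t-2}, y_{t-3}
y_hist = [-112.0, 85.0, -129.0]            # y_{T-2}, y_{T-1}, y_T (oldest first)
x_future = [0.3, 0.26, -0.06]              # x_{T-2}, x_{T-1}, x_T (the lag-3 regressor values)

for x in x_future:
    y_next = (delta + delta3 * x
              + theta[0] * y_hist[-1] + theta[1] * y_hist[-2] + theta[2] * y_hist[-3])
    y_hist.append(y_next)

print(np.round(y_hist[3:], 2))             # approximately [44.08, -14.24, 19.80]
```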
(v) The forecast of housing starts, in thousands of houses, for January 2006 is
EXERCISE 9.14
(a)
Testing the null hypothesis $H_0: \rho = 0$ against the alternative $H_1: \rho \ne 0$, we obtain the test
statistic value $LM = 4.383$ with a corresponding p-value of 0.0363. Since the p-value is
less than a significance level of 0.05, we reject the null hypothesis and conclude that the
errors in this model are correlated.
(b)
There are a number of possible ARDL models that could be chosen here. Given the
relatively small number of observations, we have opted for the simplest one that
eliminates first-order autocorrelation, namely an ARDL(1,0) model. Also, the coefficients
of the extra terms included in ARDL(2,0) and ARDL(1,1) models were found to be
insignificant. If you experiment with more lags you will find that an ARDL(4,1) model
has a large number of significant coefficients. However, estimating such a large model
with such a small sample is too ambitious. The estimated ARDL(1,0) model is
(se) (1.2062) (0.0787) (0.2005)
The LM test for AR(1) errors yields a test value of $LM = 0.756$ with a corresponding
p-value of 0.3845, indicating that the correlation found in part (a) has been eliminated by
the inclusion of $\ln(UNITCOST_{t-1})$.
(c)
Lag   Estimate
0     -0.1862
1     -0.1195
2     -0.0767
3     -0.0492
4     -0.0316
5     -0.0203
These estimates suggest that, as cumulative production increases, most of the learning
effect that reduces unit cost occurs immediately, but there is also a gradually declining
learning effect that continues to reduce costs beyond the immediate period.
(d)
EXERCISE 9.15
(a)
With the trend specified as $t = 1, 2, \ldots, 111$, the least squares estimated equation is
The positive sign for $b_2$ and the negative sign for $b_3$, and their relative magnitudes,
suggest that the trend for ln(POW) is increasing at a decreasing rate. A positive $b_4$ implies
the elasticity of power use with respect to productivity is positive. The residual
correlogram is depicted in the figure below. There is strong evidence of autocorrelation,
with significant positive correlations exceeding the significance bound $1.96/\sqrt{111} = 0.186$
up to lag 7, and some significant negative correlations below the negative bound
$-1.96/\sqrt{111} = -0.186$ beyond lag 19.
[Figure: correlogram of the residuals from the least squares regression]
(b)
(se) (0.1052) (0.0584)
One lag of POW and one lag of PRO were used because the coefficients of the longer lags
were not significantly different from zero. In addition, the residual correlogram in Figure
xr9.15b suggests that autocorrelation has been largely eliminated with only r5 , r6 and r21
statistically significant and even these values are relatively small.
[Figure xr9.15b: correlogram of the residuals from the ARDL model]
(c)
The p-values for testing the hypothesis $H_0: \beta_4 = 1$ are 0.4817 and 0.8704 for parts (a) and
(b), respectively. We do not reject $H_0$ in either case. Including more lags to correct for
autocorrelation has led to a large change in the p-value, but the test decision is still the
same.
(d)
Denoting the coefficients of $\ln(PRO_t)$ and $\ln(PRO_{t-1})$ as $\delta_0$ and $\delta_1$, respectively, and
that of $\ln(POW_{t-1})$ as $\theta_1$, the lag weights can be shown to be
$$\delta_0,\quad \theta_1\delta_0 + \delta_1,\quad \theta_1(\theta_1\delta_0 + \delta_1),\quad \theta_1^2(\theta_1\delta_0 + \delta_1),\quad \theta_1^3(\theta_1\delta_0 + \delta_1),\;\ldots$$
The total multiplier is therefore
$$\delta_0 + (\theta_1\delta_0 + \delta_1)\big(1 + \theta_1 + \theta_1^2 + \theta_1^3 + \cdots\big)
   = \frac{\delta_0 + \delta_1}{1 - \theta_1}
   = \frac{0.98278 - 0.87683}{1 - 0.79663} = 0.5210$$
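As a check, the weights and the total multiplier implied by the estimates used above ($\delta_0 = 0.98278$, $\delta_1 = -0.87683$, $\theta_1 = 0.79663$):

```python
import numpy as np

d0, d1, th1 = 0.98278, -0.87683, 0.79663

weights = [d0, th1 * d0 + d1]
for _ in range(8):
    weights.append(th1 * weights[-1])       # geometric decline at rate theta1

total = (d0 + d1) / (1 - th1)
print(np.round(weights[:5], 4))
print(round(total, 4))                      # total multiplier ~ 0.5210
```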
EXERCISE 9.16
(a)
(se) (0.4728) (0.0162) (0.0006) (0.000005) (0.1030)
(b)
(0.0595)
The p-value for testing the hypothesis that the coefficient of the dummy variable is zero
for the model is 0.0152. Since it is less than the level of significance 0.05, we reject the
null hypothesis and conclude that the dummy variable is statistically significant.
The p-value for the same hypothesis test for the model with lags is 0.2019. Since it is
greater than the level of significance 0.05, we do not reject the null and therefore cannot
conclude that the dummy variable is significant.
Thus, when we do not account for autocorrelation the structural change is statistically
significant, but when we do correct for autocorrelation, the structural change is no longer
significant. In general, these results suggest that, if we do not specify the correct lag
structure, we can make misleading conclusions about the existence of structural change.
EXERCISE 9.17
(a)
(se) (0.2873) (0.0769)
The correlogram of the residuals is shown below. The significance bounds are drawn at
$\pm 1.96/\sqrt{117} = \pm 0.181$. There are a few significant correlations at long lags
(specifically at lag orders 7, 12 and 22), but they are relatively small. The spike at lag 12
could indicate a monthly seasonal effect.
[Figure: correlogram of the residuals]
(b)
EXERCISE 9.18
(i) The estimated ARDL model is
Only one lag of $x_t$ was included because neither additional lags of $x_t$ nor lags of $y_t$
were statistically significant, with the exception of $y_{t-4}$. Including $y_{t-4}$ would have
unnecessarily complicated the model, particularly in light of the residual correlogram
given below, which shows no significant autocorrelations. The significance bounds in
this correlogram are at $\pm 1.96/\sqrt{215} = \pm 0.134$.
[Figure: correlogram of the residuals from the part (a) model]
(b)
(se) (0.2549) (0.0672)
An ARDL(1,1) model was specified because additional lags of both variables were not
statistically significant, and they were also unnecessary to eliminate autocorrelation.
All correlations in the correlogram below are small, with only that at lag 16
marginally significant.
[Figure: correlogram of the residuals from the ARDL(1,1) model]
(i) For (b) (i) the lag weights for 8 quarters are:
0 = 0.3685, 1 = 0.2299, 2 = 3 = 4 = 5 = 6 = 7 = 8 = 0
The lag weights beyond lag 1 are zero because there is no lagged dependent variable
in the model. The estimated lag weights suggest that a temporary 1 unit increase in xt
will cause yt to increase by 0.3685 at time t, and yt +1 to increase by 0.2299 at time
$t+1$, with no changes in $y$ from time $t+2$ onwards. In terms of the original
variables, a 1% temporary increase in the growth of disposable personal income will
lead to an immediate increase in the growth of personal consumption of 0.3685%, an
increase of 0.2299% in the following quarter, and no further changes in
subsequent periods.
Using the notation
$$z_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \theta_1 z_{t-1} + v_t$$
the lag weights for (b)(ii) for the first 8 quarters are given by the expressions
$$\beta_0 = \delta_0, \qquad \beta_1 = \delta_1 + \theta_1\beta_0, \qquad \beta_s = \theta_1\beta_{s-1} \quad\text{for } s > 1$$
and their estimates are
$$\begin{aligned}
&\hat\beta_0 = 1.2379, \quad \hat\beta_1 = 0.8264, \quad \hat\beta_2 = -0.1632, \quad \hat\beta_3 = 0.0322, \quad \hat\beta_4 = -0.0064, \\
&\hat\beta_5 = 0.0013, \quad \hat\beta_6 = -0.0002, \quad \hat\beta_7 = 0.0000, \quad \hat\beta_8 = 0.0000
\end{aligned}$$
These lag weights show the changes in growth of consumption of durable goods due
to a temporary increase in income growth. They suggest that the current and one-quarter
lagged effects are relatively large and positive, but after that the effects are
relatively small and oscillate in sign, the oscillation being a consequence of the
negative estimate of $\theta_1$.
The total multiplier for part (b)(i) is $\sum_s \beta_s = 0.3685 + 0.2299 = 0.5984$. This result
shows that the long run effect on $y$ of a sustained 1-unit increase in $x$ is 0.5984.
The total multiplier for part (b)(ii) is
$$\sum_{s=0}^{\infty}\beta_s = \beta_0 + \beta_1 + \beta_2 + \beta_3 + \cdots
  = \frac{\delta_0 + \delta_1}{1 - \theta_1}
  = \frac{1.2379 + 1.0709}{1 - (-0.1975)} = 1.9280$$
It shows that the long run effect on $z$ of a sustained 1-unit increase in $x$ is 1.928.