How Smart Is My Dummy? Time Series Tests For The Influence of Politics
doi:10.1093/pan/mpi004
Tony Caporale
Department of Economics, 331 Bentley Annex,
Ohio University, Athens, OH 45701
e-mail: [email protected] (corresponding author)
Kevin Grier
Department of Economics, 335 Hester Hall,
University of Oklahoma, Norman, OK 73019
e-mail: [email protected]
Of necessity, many tests for political influence on policies or outcomes involve the use of
dummy variables. However, it is often the case that the hypothesis against which the
political dummies are tested is the null hypothesis that the intercept is otherwise constant
throughout the sample. This simple null can cause inference problems if there are
(nonpolitical) intercept shifts in the data and the political dummies are correlated with these
unmodeled shifts. Here we present a method for more rigorously testing the significance of
political dummy variables in single equation models estimated with time series data. Our
method is based on recent work on detecting multiple regime shifts by Bai and Perron. The
article illustrates the potential problem caused by an overly simple null hypothesis, exposits
the Bai and Perron model, gives a proposed methodology for testing the significance of
political dummy variables, and illustrates the method with two examples.
Before the curse of statistics fell upon mankind we lived a happy, innocent life
—Hilaire Belloc, On Statistics
1 Introduction
The interaction between politics and economic policy making is an important and
fascinating area of study. Much has been learned from the many papers using political variables to
help predict the time path of economic variables (and vice versa). However, there is a large
potential statistical problem in much of the literature that is seldom discussed, namely the
frequent absence of a well-defined null hypothesis to the posited pattern of political
influence. Of necessity, political information often enters time series models as a group of
dummy variables. Often the null hypothesis is simply that the chosen political dummies
are insignificant and the alternative is that politics ‘‘matters.’’ Note that the null hypothesis
implicitly embraces a constant intercept throughout the sample.
Authors’ note: An earlier version of this article was presented at the 2004 American Political Science Association
(APSA) meetings. The authors wish to thank John Londregan, George Krause, and three anonymous referees for
their helpful comments, criticisms, and suggestions.
Political Analysis, Vol. 13 No. 1, © Society for Political Methodology 2005; all rights reserved.
However, there may be significant intercept shifts in the sample that are not caused by the
political factors under investigation. In this case, if the political events are correlated with the
excluded true break points, they will tend to be statistically significant even though, in reality, they
are not. In some sense this is just a simple omitted variable story, but in another sense it
is a potential challenge to any set of results that concludes politics matters by testing the
significance of political change dummies against an otherwise fixed intercept.
This is an important issue in political science research, as many political phenomena
such as partisanship, divided government, autocracy, or coalition governments are
represented empirically by dummy variables. If, in the regression under study, the dummy
variable is significant, the researcher generally concludes that the political phenomenon
represented is statistically important. The test is generally very stark: either the intercept
shifts at the chosen point (or points) or it does not shift at all. However, life is probably not
that simple. There may be other factors that in reality shift the intercept while the political
factors inherently do not. But if we omit the other factors and include the political ones, we
may well uncover a spuriously significant effect as the coefficient on the political variable
will be biased away from zero.
Deriving the exact bias that arises when the true intercept shift is excluded in a model
with multiple regressors is complicated, but the simple case of one regressor is
straightforward. Suppose, for example, that the true model is given by a constant and
a particular intercept shift represented by the dummy variable D*, as shown in Eq. (1):
$Y_t = \alpha_0 + \beta_1^{*} D_t^{*} + \epsilon_t \qquad (1)$
But instead, the researcher estimates an incorrect model consisting of an intercept and
an incorrect intercept shift represented by the dummy variable DP as shown in Eq. (2):
$Y_t = \alpha_0 + \beta_1 D_t^{P} + \epsilon_t \qquad (2)$
The null hypothesis embodied in Eq. (2), that β1 equals zero, implies that the null model is
a fixed intercept with no other shift allowed. Under this incorrect null, the coefficient β1 in Eq.
(2), whose true value is zero, will have a nonzero expected value that is given by Eq. (3):

$E[\hat{\beta}_1] = \beta_1^{*}\,\frac{\operatorname{Cov}(D_t^{P}, D_t^{*})}{\operatorname{Var}(D_t^{P})} = \beta_1^{*}\,\rho(D^{P}, D^{*})\,\frac{\sigma_{D^{*}}}{\sigma_{D^{P}}} \qquad (3)$

Equation (3) shows that the size and sign of the estimated coefficient will depend on the
true coefficient on the correct variable, the correlation between the spurious and correct
variables, and the relative volatilities of the two variables.
If the researcher had instead tested the significance of dummy variable D^P in a model
that also included D* [i.e., had estimated Eq. (4), which adds D*_t as a regressor to Eq. (2)],
the expected value of the coefficient on D^P would be zero.

To illustrate, we construct an artificial series Y_t = 6.5 + 2·Shift_t + e_t, where
Shift is a dummy variable that raises the mean for 40 observations (i.e., 10 years) in the
middle of the sample and the e_t are independent draws from a normal distribution with
mean zero and variance 0.5. The series is displayed in Fig. 1.
Fig. 1 An artificial series with an intercept shift: Y(t) = 6.5 + 2*Shift(t) + e(t) [Shift = 1 from 77.1–86.4].
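As a concrete illustration, a series of this form can be generated in a few lines. The 160-quarter sample length and the seed below are arbitrary choices made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)        # assumed seed, for reproducibility of the sketch only

T = 160                               # assumed sample length (quarterly); chosen for illustration
shift = np.zeros(T)
shift[60:100] = 1.0                   # true intercept shift lasting 40 observations (10 years)

# Y(t) = 6.5 + 2*Shift(t) + e(t), with e(t) ~ N(0, 0.5)
e = rng.normal(loc=0.0, scale=np.sqrt(0.5), size=T)
y = 6.5 + 2.0 * shift + e
```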
Figure 2 illustrates our point. Suppose we have a political shift that is represented by the
dummy variable shown in the figure. This spurious dummy begins three years earlier and
ends four years later than the true shift. It is especially hard to see how this dummy could
be said to cause the observed regime shift, since any argument involving lagged effects or
anticipated effects may work at one end but not at the other. Nevertheless, under the
incorrect null of no other possible shifts, this spurious dummy will have a large and
significant coefficient (its linear correlation with the true shift is 0.67).
To further illustrate, we use the dummy shown in Fig. 2 along with four other dummy
variables that are correlated to different degrees with the true intercept shift. These
correlations range from 0.33 to 0.73 as reported in column one of Table 1. Columns 2 and
3 of Table 1 show the size and significance of these spurious dummy variables when they
are estimated via ordinary least squares (OLS) under the incorrect null of no other possible
shift, while column 4 shows the size and significance of the same dummy variables when
estimated using a simple correction for first-order autocorrelation in the errors under the
incorrect null. Finally, column 5 of the table shows the size and significance of these
spurious dummies under the correct null, which is the null that allows for the true intercept
shift. The results for the particular dummy graphed in Fig. 2 appear in the fourth row of
Table 1.
We can see that even when correcting the standard errors for general heteroskedasticity
and autocorrelation in the errors, these five spurious dummies are positive and significant
at the 0.05 level or better, with the size of their coefficients and t statistics being positive
functions of their correlation with the true, excluded dummy. When estimated with
a correction for first-order serial correlation, the last three dummies (those most highly
correlated with the true dummy) are positive and significant at the 0.01 level, the second
dummy is positive and significant at the 0.10 level, and the first dummy is not statistically
significant. Finally, as seen in column 5, when the true dummy is included in the
regression (i.e., the correct null hypothesis is imposed), the spurious dummies all have
much smaller and insignificant coefficients.
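A minimal sketch of this comparison, continuing from the simulated series above (the spurious dummy's start and end points and the HAC lag length of 4 are illustrative choices, not values taken from Table 1):

```python
import numpy as np
import statsmodels.api as sm

# Spurious "political" dummy: begins 12 quarters before and ends 16 quarters after the true shift
pol = np.zeros(T)
pol[48:116] = 1.0

# Incorrect null: fixed intercept, only the political dummy is allowed to shift the mean
wrong = sm.OLS(y, sm.add_constant(pol)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

# Correct null: the true shift dummy is also included in the model
both = sm.add_constant(np.column_stack([pol, shift]))
right = sm.OLS(y, both).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

print(wrong.tvalues[1])   # large: the spurious dummy looks "significant"
print(right.tvalues[1])   # small: it loses significance once the true break is allowed for
```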
The foregoing discussion is just a simple illustration of our overall point: in order to
have confidence in the reported significance of political dummy variables, there needs to
be a realistic null hypothesis under consideration when significance tests are performed.
In the rest of this article we outline a method to provide a more stringent null hypothesis
from which to test the importance of politics. The first step is to determine the number,
location, and confidence intervals for intercept shifts in the sample via time series
techniques. In particular, we employ the methods recently developed by Bai and Perron
(1998, 2000, 2003).
Given these statistically optimal break points, the significance of political dummy
variables can then be considered in two separate stages. If the political dummies imply the
same number of intercept shifts that are found by the time series methods and these
political shifts fall inside the confidence intervals of the time series shifts, then there is an
extremely strong case for the argument that politics fundamentally matters. If some of the
politically derived shifts match up with time series shifts, the case is weakened but
potentially sustainable.
If none of the political shift points line up with the time series break dates, then it is difficult
to make the case that political factors cause major movements in the variable under study.
However, they may still matter in the following sense. One can take the time series break
points as given and test to see whether the political dummies have any incremental
explanatory power. That is to say, even though politics may not be causing the large changes in
the behavior of the variable, political factors may still cause statistically significant changes once the
larger changes are taken into account. Even this second type of demonstration would be
a much more compelling argument in favor of the importance of politics than the common
practice of simply testing political dummies against an otherwise fixed intercept.
In what follows we present a method for statistically determining the optimal break
points based on the work of Bai and Perron, describe our suggested methodology for
testing the significance of political dummy variables in more detail, and then apply this
method to an investigation of the determinants of shifts in U.S. and U.K. monetary policy
as measured by shifts in the real interest rate.
1. The endogenous determination of break points has been an important research topic in empirical macroeconomics since Perron's (1989) demonstration that standard Dickey-Fuller unit root tests are sensitive to allowing for intercept and/or trend shifts (see Zivot and Andrews 1992 and Perron 1997).
models. This test, which BP would call SupF (2/1), also requires its critical values to be
determined experimentally. This process can continue up to the largest number of breaks
the researcher is willing to consider and will produce an estimate of both the number of
structural breaks and their locations in time.
Now we will present the Bai and Perron (1998) framework more formally, using its
simplest (intercept shift only) version. However, it is important to note that the procedure is in
no way limited to this simple case. There can be other variables in the model, and their
coefficients can either be subject to shifts or remain constant over the sample depending on
the choice of the investigator. We present this simple case for ease of exposition using the
following multiple linear regression with m break points (m + 1 regimes):

$y_t = \beta_1 z_t + \epsilon_t, \qquad t = 1, 2, \ldots, T_1$
$y_t = \beta_2 z_t + \epsilon_t, \qquad t = T_1 + 1, \ldots, T_2$
$\qquad\vdots$
$y_t = \beta_{m+1} z_t + \epsilon_t, \qquad t = T_m + 1, \ldots, T \qquad (6)$
This provides us with $\hat{\beta}_j(T_1, \ldots, T_m)$, the estimates associated with the m-partition
$(T_1, \ldots, T_m)$. Substituting them into the objective function yields the estimated break points
shown in Eq. (7),

$(\hat{T}_1, \ldots, \hat{T}_m) = \operatorname*{argmin}_{(T_1, \ldots, T_m)} S_T(T_1, \ldots, T_m), \qquad (7)$

where $S_T(T_1, \ldots, T_m)$ is the sum of squared residuals. Using the estimated break points
$(\hat{T}_1, \ldots, \hat{T}_m)$, the parameter estimates are $\hat{\beta}_j(\hat{T}_1, \ldots, \hat{T}_m)$.
Bai and Perron first suggest a SupFt(k) statistic to test the null of no structural breaks
against the alternative hypothesis that there are m = k breaks; under the null, β_1 = β_2 = ... = β_{k+1}.
For a given k, the procedure searches over all admissible sets of break dates and selects the one
that minimizes the unrestricted sum of squared residuals, which is equivalent to maximizing the
F statistic comparing the restricted (no-break) and unrestricted models.3
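A naive sketch of this search logic for the pure mean-shift case with one candidate break is given below. It only illustrates the idea; Bai and Perron's own GAUSS programs handle multiple breaks, an efficient dynamic-programming search, HAC corrections, and the nonstandard critical values.

```python
import numpy as np

def sup_f_one_break(y, trim=0.15):
    """Naive SupF-type search for one break in the mean: scan admissible break dates,
    keep the date that minimizes the unrestricted SSR (equivalently maximizes F).
    Critical values are nonstandard (Andrews 1993; Bai and Perron 1998)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    h = max(2, int(trim * T))                    # minimum admissible regime length
    ssr_r = np.sum((y - y.mean()) ** 2)          # restricted model: one constant mean
    best_t, best_ssr = None, np.inf
    for t in range(h, T - h + 1):                # candidate partition [0, t) and [t, T)
        ssr = (np.sum((y[:t] - y[:t].mean()) ** 2)
               + np.sum((y[t:] - y[t:].mean()) ** 2))
        if ssr < best_ssr:
            best_t, best_ssr = t, ssr
    if best_t is None:
        return None, 0.0
    f_stat = (ssr_r - best_ssr) / (best_ssr / (T - 2))
    return best_t, f_stat
```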
2. The Bai-Perron (1998) procedure corrects for serial correlation and different variances across segments by incorporating Andrews (1991) robust standard errors. All break models employed in this paper utilize this correction.
3. The break values can depend on the imposition of the minimal length of a segment (h). This is determined by the value of the trimming parameter (ε) that must be specified to estimate the model. Since ε = h/T, a lower value of ε implies a smaller minimum regime size. Bai and Perron (1998) provide recommendations for ε based on sample size and the maximum number of break points allowed.
Bai and Perron (1998, 2003) next propose two tests of the null hypothesis of no breaks
against at least 1 through M breaks. These are called double maximum tests. The first,
a Udmax statistic, is the maximum value of the SupFt(L), where L is an upper bound on
the possible number of breaks. The second, a Wdmax test, weights the individual statistics
such that the marginal p values are equal across values of m. This implies weights that
depend on the significance levels of the tests. The null hypothesis of both tests is no
structural breaks against an unknown number of breaks given some specific upper bound
on the possible number of break points.
If the null of no break is rejected by the double maximum tests, Bai and Perron next
suggest a sequential SupFt(L+1 | L) procedure to determine the number of structural
breaks. The statistic tests the null of L breaks against the alternative of L + 1 breaks.
Rejection in favor of a model with L + 1 breaks occurs if the overall minimal value of the
sum of squared residuals, taken over all segments in which an additional break is inserted, is
sufficiently smaller than the sum of squared residuals from the L-break model. The test is
applied for increasing L, and the selected number of breaks is the smallest L for which
SupFt(L+1 | L) fails to reject.4
Finally, estimates of the break dates need not be the global minimizers of the sum of
squared residuals. A sequential procedure can also be used to select the number of breaks
in which, if an initial break is found [based on the initial SupFt(1) test], the sample is then
divided into subgroups at the break point, and the same parameter constancy test is then
performed on the subsamples. The partitioning of the subsamples continues until the
parameter constancy test fails to reject the null. Bai and Perron (1998, 2003) are able to
develop a method to compute confidence intervals for the sequential break points by
employing a novel asymptotic theory that assumes the magnitudes of the breaks decline as
the sample size increases.
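The recursive splitting idea can be sketched as follows, reusing the sup_f_one_break helper above. This is the simple binary-segmentation variant of the procedure; the critical value crit must be taken from Bai and Perron's tables, and the full BP sequential test differs in its details.

```python
import numpy as np

def sequential_breaks(y, crit, trim=0.15):
    """Sketch of sequential break selection: test each (sub)segment for a single
    break and keep splitting while the SupF statistic exceeds `crit`."""
    y = np.asarray(y, dtype=float)
    h = max(2, int(trim * len(y)))       # minimum regime length, based on the full sample
    breaks = []

    def split(start, end):
        if end - start < 2 * h:
            return
        t, f = sup_f_one_break(y[start:end], trim=trim)
        if t is None or f < crit:
            return                       # parameter constancy not rejected: stop splitting
        b = start + t
        breaks.append(b)
        split(start, b)                  # re-test each subsample around the new break
        split(b, end)

    split(0, len(y))
    return sorted(breaks)
```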
A useful check on the break dates found using the sequential method is supplied
by Bai's (1997) repartition estimation procedure. Starting with T-consistent estimates of the
break points from the sequential procedure ($\hat{k}_i$, i = 1, 2), $k_1$ is reestimated using the subsample
$[1, \hat{k}_2]$ and $k_2$ is reestimated using $[\hat{k}_1, T]$. Cases in which these repartition estimators
reveal the same number and location of break dates provide additional confidence
in our results based on the sequential procedure.5
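For the two-break case described here, the repartition check can be sketched in a few lines, again using the sup_f_one_break helper; the function name and interface are ours.

```python
def repartition_two_breaks(y, k1, k2, trim=0.15):
    """Bai's (1997) repartition idea for two breaks: re-estimate the first break
    on the subsample [0, k2) and the second on [k1, T). Agreement with the
    sequential estimates (k1, k2) adds confidence in those dates."""
    k1_new, _ = sup_f_one_break(y[:k2], trim=trim)
    k2_new, _ = sup_f_one_break(y[k1:], trim=trim)
    return k1_new, (None if k2_new is None else k1 + k2_new)
```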
In sum, the BP methods can be used to test for the existence of structural breaks, along
with the number and location of those breaks. Confidence intervals on the break dates can
also be constructed. We argue that this method imposes a needed reality check on the true
significance of political dummy variables that are often included in regressions where the
intercept is otherwise constant.
3 A Proposed Methodology for Evaluating the Significance of Political Dummy
Variables in Time Series Equations
In this section, we outline the steps involved for testing the significance of political dummy
variables in time series equations using the BP methods described above. Put briefly, the
proposed methodology goes as follows:
(1) Choose the maximum number of break points allowed and the error structure to be
assumed in the tests;
4. Critical values for these tests are found in Bai and Perron (1998, 2003).
5. Two final methods, the Bayesian information criterion (BIC) and the modified Schwarz criterion (LWZ), have also been proposed as additional ways of determining break dates. However, Bai and Perron (2003), using Monte Carlo experiments, show that the sequential procedure works better than these alternatives, and thus we do not explore them here.
(2) Estimate the number, location and confidence intervals for the structural breaks
using the BP methods;
(3) Check to see if the political dummy variables fall within the confidence intervals
for the empirically optimal breaks. If so, declare victory as the political variables
are closely associated with major shifts in the series. If not:
(4) Check to see if, given the empirically optimal break points, the political dummies
still have incremental explanatory power via non-nested hypothesis tests.
An important preliminary step is to determine whether the variables under consideration
are stationary.6 Bai and Perron (2000, p. 10) explain that their methodology ‘‘precludes
integrated variables (with an auto-regressive unit root) but permits trending regressors.''
This issue is even less straightforward than usual here because the possibility of regime
shifts makes standard unit root tests potentially biased. In the interests of continuity and
brevity, we relegate a full discussion of this issue to the appendix and proceed to elucidate
the four-point process outlined above.
(1) Given a resolution to the stationarity question, we next turn to specifying the model.
An initial trimming percentage (ε) must be specified in order to ensure a reasonable
number of degrees of freedom for calculating an initial error sum of squares. For example, if
ε = .15, then breaks will be considered for the middle 70% of the sample. Bai and Perron's
GAUSS break point program allows for ε = .05, .10, .15, .20, .25.
The trimming specification determines the maximum possible number of breaks as well
as the minimum regime size. For instance, when ε = .10 the maximum number of breaks is
eight, since allowing nine breaks forces the break dates to be exactly at 10% intervals (.1,
.2, ..., .9). When ε = .15 the maximum (M) number of breaks is 5; M = 3 for ε = .20 and
M = 2 for ε = .25. Therefore, for a series with a sample of 100 quarters and ε = .15, there is
a maximum of five breaks (six regimes), where each regime has a minimum length of
15 quarters.
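These relationships are simple arithmetic; the helper below is illustrative and simply reproduces the ε-to-maximum-breaks mapping quoted above.

```python
import math

def trimming_limits(T, eps):
    """Minimum regime length h = eps*T and the largest number of breaks m such that
    m + 1 regimes of that relative length fit strictly inside the sample
    (eps = .10 -> 8, .15 -> 5, .20 -> 3, .25 -> 2, as in the text)."""
    h = int(round(eps * T))                   # minimum segment length in observations
    m_max = math.ceil(1.0 / eps - 1.0) - 1    # largest m strictly below 1/eps - 1
    return h, m_max

print(trimming_limits(100, 0.15))   # (15, 5): five breaks, six regimes of at least 15 quarters
```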
Next, the assumed error structure within each segment and across segments must be
specified and modeled. Bai and Perron consider the following possibilities:
A. No serial correlation and constant error variances within and between
segments.
B. No serial correlation and different error variances between segments.
C. Serial correlation and constant error variances within and across segments.
D. Serial correlation in the errors and nonconstant error variances within and between
segments.
For case A, the critical F values, point estimates as well as confidence intervals of the
break points are generated using classical OLS assumptions. Cases B and D require the
researcher to use the Andrews (1991) HAC standard error option to estimate the model as
well as choose an option (for case D) that allows the variance of the residuals to be
different across all segments. Case C requires a prewhitening of the data prior to estimation
of the structural break model.
As a practical matter, Bai and Perron have created free GAUSS code that implements
any of these cases. They recommend choosing the most general error structure (case D),
6. Variables that have a unit root (or stochastic trend) have population parameters (mean or variance) that are time dependent. Since it is well known that correlations between independent nonstationary time series can often be spurious, variables must be rendered stationary prior to being used for parameter estimation and hypothesis testing.
since consistent estimates of the break points are still assured even if the corrections are
implemented on a case A dataset.
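In a modern scripting environment the same kind of correction is readily available. The sketch below fits a mean-shift regression with Newey-West-style HAC standard errors; the function name, the regime-dummy coding, and the default lag length are illustrative choices rather than part of the BP programs.

```python
import numpy as np
import statsmodels.api as sm

def mean_shift_fit(y, break_dates, maxlags=4):
    """Regress y on a constant plus one dummy per post-break regime, with
    HAC (Newey-West) standard errors in the spirit of BP's cases B/D."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    bounds = [0] + sorted(break_dates) + [T]
    X = np.ones((T, 1))                       # regime 1 mean is the constant
    for j in range(1, len(bounds) - 1):       # dummies for regimes 2, 3, ...
        d = np.zeros(T)
        d[bounds[j]:bounds[j + 1]] = 1.0
        X = np.column_stack([X, d])
    return sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
```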
(2) Bai and Perron (1998, 2003) argue that the global and sequential SupFt(L+1/L)
tests provide the most reliable estimates of the number and location of the break dates.
Clearly the strongest evidence would involve both methods yielding the same answers. In
cases in which they disagree, Bai and Perron (1998) suggest that the global procedure
should be used for any model with more than one significant break.7
Beyond determining the number and location of the breaks, 90, 95, and 99% confidence
intervals for the break dates are provided. The derivations of these values are explained in
Bai and Perron (2000, pp. 11–13) and rest on the use of a novel asymptotic framework in
which the magnitudes of the shifts converge to zero as the sample size increases.
(3) Given these confidence intervals, we can examine how well the proposed political
dummy variables (which are intercept shifts) correlate with the break points estimated via
BP methods. If the number of regimes implied by the political model is equal to the
number implied by the time series model, and each political break point falls inside the
confidence interval of a BP break, that will be very strong evidence in favor of the primacy
of the political effects. They are responsible for all the major break points in the sample. If
there is a partial match between political breaks and BP breaks, the importance of politics
can still be argued, but the matter becomes open to interpretation.
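Operationally, step (3) amounts to asking, for each political change date, whether it lies inside any BP confidence interval. A small sketch, using a year-plus-quarter-fraction coding of dates (our convention) and the U.S. intervals reported in the application below:

```python
def political_matches(political_dates, bp_intervals):
    """Map each political break date to True if it falls inside any BP 95% CI.
    Dates are coded as year + (quarter - 1)/4, e.g. 1981.1 -> 1981.00 (our convention)."""
    return {d: any(lo <= d <= hi for lo, hi in bp_intervals) for d in political_dates}

# U.S. confidence intervals from the application below, in the same coding
bp_ci = [(1966.00, 1969.25), (1971.75, 1973.25), (1980.00, 1981.00), (1985.00, 1987.50)]
party_changes = [1969.00, 1977.00, 1981.00, 1987.00, 1993.00, 1995.00]
print(political_matches(party_changes, bp_ci))
# {1969.0: True, 1977.0: False, 1981.0: True, 1987.0: True, 1993.0: False, 1995.0: False}
```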
(4) Even if the political breaks are not closely related to the BP regime shifts, they may
still be statistically significant and important variables. One can allow for this possibility
by testing whether, taking the time series shift points as given, the political dummy
variables have any incremental explanatory power.
Perhaps the most straightforward way to accomplish this is via Davidson and
MacKinnon's J-test methodology for non-nested hypothesis testing. This allows us to test
the hypothesis that model X rejects (or dominates) model Y by including the fitted value of
model X as an additional variable in the model Y regression. If model X’s predicted values
are insignificant, the validity of model Y is not rejected. Alternatively, if model X’s
predicted value is significant, we conclude that model Y is rejected by model X. The same
procedure can then be used to test model X against Y. The strongest evidence concerning
the superiority of a model (say X) would be that it dominates model Y and fails to be
dominated by model Y. In our case, we would be testing whether the fitted values from
a political intercept shift model had any incremental explanatory power in the BP intercept
shifting model.8 The following section presents two real-world applications of our
proposed method.
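One half of that J test can be sketched directly: take the fitted values from the rival (political) model and test them as an extra regressor in the null (BP) model. The helper below is illustrative; the HAC lag length mirrors the applications that follow.

```python
import numpy as np
import statsmodels.api as sm

def j_test_half(y, X_null, X_rival, maxlags=4):
    """Davidson-MacKinnon style check: does the rival model's fit add anything
    once the null model (here, the BP break-point model) is taken as given?
    X_null and X_rival are design matrices (constant plus shift dummies)."""
    rival_fit = sm.OLS(y, X_rival).fit().fittedvalues
    X = np.column_stack([X_null, rival_fit])
    res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.tvalues[-1]           # t statistic on the rival model's fitted values
```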
7. A divergence can occur in the case in which the SupFt(1) test is insignificant but the SupFt(2) test rejects zero breaks in favor of two. The sequential procedure will then stop at zero breaks, whereas the global SupFt(2/1) test may suggest a two-break model.
8. One could also consider testing this hypothesis by including both the empirical break point dummies and the political break point dummies in the regression of interest and using an F test of the hypothesis that the political dummies all had coefficients of zero.
9. A few examples include Hibbs (1977), Beck (1982), Alesina and Sachs (1988), Hakes (1990), Grier (1991, 1996), and Krause (1994).
Of course, there are many (all imperfect) ways to measure monetary policy. Given the decline
in correlation between monetary aggregates and economic outcomes, a consensus has
emerged in favor of using a short-term interest rate as the policy measure. Yet nominal
interest rates can be misleading policy indicators without controlling for inflation. That is
to say, an 8% interest rate in a zero inflation environment is indicative of restrictive policy,
but that same rate in a period of 10% inflation is not at all restrictive.10
Caporale and Grier (2000) use a methodology similar to the one exposited above and
show that, in the U.S. case, large shifts in the real interest rate are closely related to
changes in party control of either the executive or one of the legislative branches of the
federal government, and not at all related to changes in the chairmanship of the Federal
Reserve.
10. For a further discussion, see Caporale and Grier (1998).
11. Note that while these regressions are extremely parsimonious, the standard errors of the coefficients are estimated using the Newey-West formula, making them consistent in the face of arbitrary types of autocorrelation and heteroskedasticity.
Now we consider finding the optimal number and location of break points using the BP
methods outlined above. As shown in Table 4, the two general tests for the presence of
structural breaks, the UDmax and Wdmax tests, are both significant at the 0.01 level. We
consider finding the optimal number of breaks by using both the SupF(L+1 | L) tests and
the sequential procedure. In both cases, four breaks are chosen, with dates of 1967.2,
1973.1, 1980.3, and 1986.2.12 These break points are fairly tightly estimated; the 95%
confidence intervals are 1966.1–1969.2, 1971.4–1973.2, 1980.1–1981.1, and 1985.1–
1987.3. These results are reported in Table 4, and column 3 of Table 3 shows that the BP
dates account for around 58% of the variation in the real rate.
As can be seen by comparing the political and bureaucratic change dates in Table 2
with the confidence intervals for the empirically estimated break dates in Table 4, the BP
estimated breaks capture the Democrat to Republican presidential change in 1969.1, the
Democrat to Republican presidential and Senate change in 1981.1, and the Republican to
Democrat Senate change in 1987.1. The Republican to Democrat presidential change in
1977.1 does not correlate with a structural shift, nor do the Republican to Democrat
presidential change in 1993.1 or the Democrat to Republican House and Senate change in
1995.1.13
By contrast, each change of the chairmanship of the Federal Reserve occurred outside
the confidence intervals for the structural breaks. This is a striking result, as it shows that
despite popular belief, large changes in monetary policy are not largely determined by
changes in Federal Reserve leadership.
Given the coincidence of the BP break points and political changes, we believe that
there is a significant effect of large political changes on the real interest rate. In this
subsection, we go on to consider whether, given the null hypothesis of break points only at
the dates uncovered by the Bai and Perron procedures, there is any additional evidence of
political influence on the U.S. real interest rate. That is to say, we assume the BP break
points are not political and test to see if political changes achieve statistical significance
taking the BP dates as given. We accomplish this by means of non-nested hypothesis
tests.14
Columns 1 through 3 of Table 3 report OLS regressions of the real interest rate on the BP
break dates, the party change break dates, and the Federal Reserve chair change break dates.
In column 4 the predicted values of the political change regression are used as an
additional regressor in the BP break date regression. As can be seen, the resulting
12. These are the exact same dates reported in Caporale and Grier (2000). Also note that for both our U.S. and U.K. real rate results, the repartition method produced the identical number, location, and confidence intervals for the break dates as the sequential procedure.
13. However, these last two omissions may not be surprising because they happen very close to each other and intuitively we might expect the two changes to offset each other.
14. Our test can be recognized as one-half of the so-called J-test methodology of Davidson and MacKinnon (1981).
Table 3 Alternative mean shifting models of U.S. real interest rates

| | (1) Political Model | (2) Federal Reserve Chair Model | (3) Structural Break Model | (4) Structural Break + Political Fit | (5) Structural Break + Federal Reserve Fit |
| Constant | 1.64 (9.10) | 1.55 (8.08) | 1.88 (11.62) | 1.36 (5.14) | 1.94 (6.58) |
| Clinton | 1.29 (2.46) | | | | |
| Reagan-Bush | 0.60 (1.44) | | | | |
| Carter | 2.69 (4.10) | | | | |
| Nixon-Ford | 1.90 (3.32) | | | | |
| Rep Congress | 2.61 (5.67) | | | | |
| Burns | | 2.06 (3.72) | | | |
| Miller | | 3.65 (6.77) | | | |
| Volcker | | 2.38 (3.01) | | | |
| Greenspan | | 0.65 (1.82) | | | |
| BP Regime 2 | | | 1.03 (4.99) | 0.61 (1.95) | 1.07 (4.06) |
| BP Regime 3 | | | 2.45 (6.60) | 2.16 (4.93) | 2.49 (6.17) |
| BP Regime 4 | | | 6.84 (15.06) | 5.29 (6.79) | 7.03 (8.96) |
| BP Regime 5 | | | 3.04 (72.9) | 2.42 (6.97) | 3.10 (7.18) |
| Pol fit | | | | 0.31 (2.51) | |
| Fed fit | | | | | 0.04 (0.27) |
| Adjusted R2 | .47 | .38 | .58 | .59 | .58 |

Note. All regressions are estimated using Newey-West HAC corrected standard errors with lag truncation = 4. t statistics in parentheses.
coefficient is positive and significant at the 0.05 level, indicating that even with this
stringent null hypothesis, party change is a significant determinant of the real interest rate.
By contrast, the predicted values from the Federal Reserve chair regression are completely
insignificant when added as an ancillary regressor to the BP break date regression, as
shown in column 5.
In sum, then, the evolution of the U.S. real interest rate is significantly influenced by
changes in party control of branches of the federal government, but not by changes in the
chairmanship of the Federal Reserve. The structural break points in the series are
reasonably closely correlated with political changes, and additional significant information
is carried in the political change dates beyond the information in the optimal break points.
Neither of these results obtain for the Federal Reserve chair change dates.
Table 4 Structural break tests for the U.S. real interest rate

| SupFt(1) | SupFt(2) | SupFt(3) | SupFt(4) | SupFt(5) |
| 26.26* | 29.43* | 39.12* | 37.26* | 31.49* |

| SupFt(2/1) | SupFt(3/2) | SupFt(4/3) | SupFt(5/4) |
| 37.35* | 37.35* | 23.62* | 1.57 |

| Udmax | Wdmax (10%) | Wdmax (5%) | Wdmax (1%) |
| 39.12* | 63.89** | 69.11** | 78.83* |

Number of breaks selected: sequential procedure, 4; repartition procedure, 4

Break point dates and 95% confidence intervals:
T̂_1 = 67.2 (66.1–69.2)
T̂_2 = 73.1 (71.4–73.2)
T̂_3 = 80.3 (80.1–81.1)
T̂_4 = 86.2 (85.1–87.3)

*p < .01, **p < .05, ***p < .10
again consider finding the optimal number of breaks by using both the SupF(L+1 | L)
tests and the sequential procedure. In both cases, three breaks are chosen, with dates of
1970.4, 1980.3, and 1993.3. The 1980.3 break point has a tightly estimated confidence
interval of 1980.1–1982.4. The other two breaks are not as tightly estimated. Their
confidence intervals are 1964.2–1971.2 and 1991.2–1999.1.15
We can see that the first U.K. break point corresponds with the change in government
from Labour to Conservative in 1970.2, and the last U.K. break point corresponds with the
change in government from Conservative to Labour in 1997.1, though this is due mainly to
the imprecise estimate of the BP break date. Interestingly, the famous Labour to
Conservative government switch in 1979.2 is close to but outside of the 95% confidence
interval for the second U.K. break point by three quarters. The Conservative to Labour
government change in 1964.4 is not related to any BP break date.
Turning to changes in the director of the Bank of England, the change from Cromer to
O'Brien in 1966.3 falls in the confidence interval for the first BP break point, and the
change from Leigh to George in 1993.3 coincides exactly with the third BP break point.
The other two changes in the director of the Bank of England (O'Brien to Richardson in
1973.3 and Richardson to Leigh in 1983.3) are unrelated to any BP break dates.
The U.K. case is less clear cut than the U.S. case. There are five political regime shifts, two
of which are in BP shift confidence intervals, but in neither of these cases is the political shift
‘‘close’’ to the exact BP break point. Further, the most famous political shift in the sample,
from Callaghan to Thatcher, is not related to any BP shift. There are four Bank of England
director shifts in the data; one of them coincides directly with a BP shift point and another falls
inside a BP shift confidence interval but again is not very close to the estimated break point.
15. Note that one of these breaks corresponds exactly to a U.S. break, namely the one at 1980.3. The other two estimated U.K. break dates do not fall inside the confidence intervals for the U.S. breaks. Thus there is a significant amount of independent variation in the U.K. series, and we are probably justified in treating it as a separate case. Using some relatively strong assumptions (relative purchasing power parity, the international Fisher effect, and uncovered interest parity), one can derive a real interest parity condition for international interest rates. This condition implies that single country real rates do not move independently of the ''world'' real rate. Real interest parity is seldom found in the data.
Neither set of changes relates to the real rate as well as does political change in the United
States, nor does one type of change dominate the other as was the case in the United States.
We now consider whether, taking the BP break points as given, either party government
change or Bank of England director change has any additional, independent explanatory
power for the U.K. real rate. Columns 4 and 5 of Table 6 show that neither the predictions
of the party change model nor the Bank of England change model have any significant
explanatory power above and beyond the BP break points. That is to say, the political and
bureaucratic variables that are significant when tested against the null hypothesis of an
otherwise fixed intercept are insignificant when tested against a null that includes the
statistically optimal intercept shifts.
4.3 Discussion
Of the four sets of dummy variables studied (political change U.S., political change U.K.,
Federal Reserve chair change U.S., and Bank of England director change U.K.) all are
significant against the null of a fixed intercept. None of these sets of dummy variables
match up precisely to the statistically optimal intercept shifts uncovered by the BP
methods used here, though the political change U.S. dummies are reasonably close.
Further, only the political change U.S. dummies are significant when tested against
a null that includes the BP optimal break points. These examples illustrate both the use of
the BP methodology and the problems that can arise when testing for the influence of
dummy variables against an overly simple null hypothesis.
Table 6 Alternative mean shifting models of U.K. real interest rates 1961.1–1999.4
| | (1) Political Model | (2) Central Bank Chair Model | (3) Structural Break Model | (4) Structural Break + Political Fit | (5) Structural Break + Central Bank Fit |
| Constant | 1.03 (2.29) | 1.07 (3.21) | 1.63 (4.4) | 1.32 (2.21) | 1.48 (3.64) |
| Blair | 2.51 (3.91) | | | | |
| Thatcher-Major | 3.04 (3.96) | | | | |
| Wilson-Callaghan | 5.67 (2.85) | | | | |
| Heath | 2.64 (3.91) | | | | |
| Wilson | 1.18 (1.92) | | | | |
| George | | 2.19 (5.02) | | | |
| Leigh | | 4.29 (8.06) | | | |
| Richardson | | 2.68 (1.56) | | | |
| O'Brien | | 0.55 (0.51) | | | |
| Regime 2 | | | 5.28 (4.42) | 4.44 (4.20) | 4.92 (4.32) |
| Regime 3 | | | 8.88 (7.35) | 7.53 (4.62) | 7.97 (6.42) |
| Regime 4 | | | 2.13 (3.92) | 2.09 (3.73) | 2.06 (3.88) |
| PM fit | | | | 0.20 (0.64) | |
| Bank fit | | | | | 0.19 (1.26) |
| Adjusted R2 | .32 | .24 | .41 | .41 | .41 |

Note. All regressions are estimated using Newey-West HAC corrected standard errors with lag truncation = 4. t statistics in parentheses.
These examples also allow us to point out some of the limitations of the BP
methodology. Since the BP procedure can only detect statistical break points in the data, it
cannot tell us why the shift is occurring (nor does it purport to). Clearly no empirical
technique is a substitute for a well-specified theory.
To further illustrate the importance of theory, note that our empirical hypotheses in this
article are tested with important auxiliary assumptions, the most relevant being that
political or bureaucratic regime changes should have a contemporaneous effect on the
variable under study. This allows us to argue that the Thatcher regime switch in 79.2 did
not have a significant effect on U.K. real interest rates, since the break point in the series
that takes place in 80.3 has a confidence interval that only reaches back to 80.1. Certainly
a model that allows for a delayed effect of the policy change may well argue the opposite.
If a researcher has a well-specified theory that includes lead or lag effects of policy
changes, there is nothing that precludes that theory from being tested using BP techniques.
Clearly for some economic variables (like real output), longer lags may be easily justified in
picking up the effects of the policy regime changes, while for others (like interest rates),
a longer lag structure would be harder to justify. The danger with overreliance on (especially
overly loose) lead or lagged effects is that it can lead to any result ‘‘confirming’’ a theory.
Our tests also assume that the real interest rate is constant inside each regime and
subject to infrequent regime shifts. This may seem like a stark model, but it has been
employed several times in the literature.16 Other theories may imply a range of slope
coefficients that shift over time or an intercept that shifts along with slope coefficients that
do not. The BP methodology will accommodate these permutations, but the substantive
16. For example, Garcia and Perron (1996), Caporale and Grier (2000), and Bai and Perron (2003).
answer obtained clearly can depend on the specification of the model. That is, if the reader
is not happy with the form of the regression we used, that is an issue to take up with us and
not with the BP method.
5 Conclusion
In this article we proposed using the statistical methods of Bai and Perron to create more
stringent tests for the influence of political dummy variables on time series data. We show
that the correlation between central bankers and monetary policy is more apparent than real in
the United States and the United Kingdom and that political change is an important deter-
minant of monetary policy (as measured by short-term real interest rates) in the United States.
Our examples involve time series models of monetary policy with intercept shifts only.
However, the technique applies more broadly. The methods can be used to uncover the
number and location of break points in slope coefficients as well as in intercepts.
We are currently working on extending our tests for political influence on
monetary policy in this direction.
Finally, to close as we began, both the problem described and the method presented
here have applications far beyond the study of monetary policy, since political dummy
variables are widely used in empirical research in the social sciences on topics like
government spending, taxation, tariff rates, exchange rates, and probably many others.
The standard augmented Dickey-Fuller (ADF) regression is

$\Delta y_t = \alpha + \delta y_{t-1} + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \epsilon_t$

where the null hypothesis that the series has a unit root (δ = 0) is tested against the
alternative of stationarity (δ < 0). The number of lags k in the ADF regression must be
determined by some criterion. A common choice is the number of lags that
minimizes the Akaike information criterion (AIC).
The ADF test can also be modified by allowing for an alternative hypothesis in which
the series contains a deterministic trend. The appropriate ADF regression becomes

$\Delta y_t = \alpha + \beta t + \delta y_{t-1} + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \epsilon_t$

and the computed ADF statistic is the OLS t statistic testing δ = 0 against δ < 0.
However, for present purposes, standard ADF tests are problematic. Zivot and Andrews
(1992) explain that these tests can be misleading (fail to reject a false null) if the time series
undergo discrete structural changes, which is exactly the situation we are studying. They
demonstrate that adding level and/or trend shifts to ADF equations lowers the chances of
falsely concluding that a break or trend break stationary series is nonstationary.17
Their procedure involves altering the standard ADF test by estimating

$\Delta y_t = \alpha + \theta DU_t(k) + \beta t + \delta y_{t-1} + \sum_{i=1}^{p} \gamma_i \Delta y_{t-i} + \epsilon_t$

where (k) indicates the potential break point in the time series and DU_t denotes a level
shift that equals one at and after the break point and zero before. The unit root test is then
the t statistic evaluating the null of δ = 0 (nonstationary) against δ < 0 (break stationary),
using new critical values supplied by Zivot and Andrews. They propose estimating the
model for every possible (k) and choosing the one that minimizes the t statistic on the
coefficient δ.18
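Both tests are available in standard software. A minimal sketch using statsmodels (assuming a recent version that ships zivot_andrews, and a hypothetical file real_rate.txt holding the series):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

y = np.loadtxt("real_rate.txt")   # hypothetical file holding the real-rate series

# Standard ADF test with the lag length chosen by the AIC
adf_res = adfuller(y, regression="c", autolag="AIC")
print("ADF statistic:", adf_res[0], "p-value:", adf_res[1])

# Zivot-Andrews test allowing an endogenously chosen level shift under the alternative
za_res = zivot_andrews(y, regression="c", autolag="AIC")
print("ZA statistic:", za_res[0])
```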
References
Alesina, Alberto, and Jeffrey Sachs. 1988. ‘‘Political Parties and the Business Cycle in the United States.’’
Journal of Money, Credit, and Banking 20:63–82.
Andrews, Donald W. K. 1991. ‘‘Heteroskedasticity and Autocorrelation Consistent Covariance Matrix
Estimation.’’ Econometrica 59(3):817–858.
Andrews, Donald W. K. 1993. ‘‘Tests for Parameter Instability and Structural Change with Unknown Change
Point.’’ Econometrica 61(4):821–856.
Bai, Jushan. 1997. ‘‘Estimating Multiple Breaks One at a Time.’’ Econometric Theory 13:315–352.
Bai, Jushan, and Pierre Perron. 1998. ‘‘Estimating and Testing Linear Models with Multiple Structural Changes.’’
Econometrica 66:47–78.
Bai, Jushan, and Pierre Perron. 2000. ‘‘Multiple Structural Changes: A Simulation Analysis.’’ Boston University
Working Paper.
Bai, Jushan, and Pierre Perron. 2003. ‘‘Computation and Analysis of Multiple Structural Change Models.’’
Journal of Applied Econometrics 18:1–22.
Beck, Nathaniel. 1982. ‘‘Presidential Influence on the Federal Reserve.’’ American Journal of Political Science
26:415–445.
Brown, R. L., J. Durbin, and J. M. Evans. 1975. ‘‘Techniques for Testing the Constancy of Regression
Relationships over Time.’’ Journal of the Royal Statistical Society, Series B 37:149–192.
Caporale, Tony, and Kevin Grier. 1998. ‘‘A Political Model of Monetary Policy with Application to the Real Fed
Funds Rate.’’ Journal of Law and Economics 41:409–428.
Caporale, Tony, and Kevin B. Grier. 2000. ‘‘Political Regime Change and the Real Interest Rate.’’ Journal of
Money, Credit, and Banking 32:320–334.
Chow, Gregory. 1960. ‘‘Testing the Equality between Sets of Coefficients in Two Linear Regressions.’’
Econometrica 28:591–605.
17. Both of the real interest rates that we investigate in this paper were overwhelmingly found to be break point stationary using the ZA procedure. This finding is consistent with many recent studies on real rates (see Garcia and Perron 1996 and Caporale and Grier 2000).
18. The test can also be performed by interacting the trend with the break point dummy and by allowing both an intercept and a trend shift. Zivot and Andrews (1992) provide critical values for these alternative tests as well.
Davidson, Russell, and James G. MacKinnon. 1981. ‘‘Several Tests for Model Specification in the Presence of
Alternative Hypotheses.’’ Econometrica 49:781–793.
Garcia, Rene, and Pierre Perron. 1996. ‘‘An Analysis of the Real Interest Rate under Regime Shifts.’’ Review of
Economics and Statistics 79:327–337.
Grier, Kevin. 1991. ‘‘Congressional Influence on U.S. Monetary Policy: An Empirical Test.’’ Journal of
Monetary Economics, October:201–220.
Grier, Kevin. 1996. ‘‘Congressional Influence on U.S. Monetary Policy Revisited.’’ Journal of Monetary
Economics, December:571–580.
Hakes, David. 1990. ‘‘The Objectives and Priorities of Monetary Policy under Different Federal Reserve
Chairmen.’’ Journal of Money, Credit and Banking 22:327–337.
Hibbs, Douglas. 1977. ‘‘Political Parties and Macroeconomic Policy.’’ American Political Science Review
71:1467–1487.
Krause, George. 1994. ‘‘Federal Reserve Policy Decision Making: Political and Bureaucratic Influence.’’
American Journal of Political Science 38:124–44.
Perron, Pierre. 1989. ‘‘The Great Crash, the Oil Price Shock and the Unit Root Hypothesis.’’ Econometrica
57:1361–1401.
Perron, Pierre. 1997. ‘‘Further Evidence from Breaking Trend Functions in Macroeconomic Variables.’’ Journal
of Econometrics 80:355–385.
Zivot, Eric, and Donald W. K. Andrews. 1992. ''Further Evidence on the Great Crash, the Oil-Price Shock, and
the Unit-Root Hypothesis.’’ Journal of Business and Economic Statistics 10(3):251–270.