Jonas Fisher
Francois Gourio
Spencer Krane
March 9, 2015
Abstract
As projections have inflation heading back toward target and the labor market continuing to improve, the Federal Reserve has begun to contemplate an increase in the federal funds rate. There is, however, substantial uncertainty around these projections. How should this uncertainty affect monetary policy? In standard models uncertainty has no effect. In this paper, we demonstrate that the zero lower bound on nominal interest rates implies that the central bank should adopt a looser policy when there is uncertainty. In the current context this result implies that a delayed liftoff is optimal. We first demonstrate this result theoretically using two canonical macroeconomic models. On the one hand, raising rates early might lead to excessively weak growth and inflation if the economic fundamentals turn out weaker than expected. On the other hand, raising rates later might lead to inflation if economic fundamentals are stronger than expected. Near the zero lower bound, monetary policy tools are strongly asymmetric and can deal with the second scenario much more easily than with the first. We next provide a quantitative evaluation of this policy using numerical simulations calibrated to the current environment. Finally, we present narratives from Federal Reserve communications that suggest risk management is a longstanding practice, and econometric evidence that the Federal Reserve historically has responded to uncertainty, as measured by a variety of indicators.
All the authors are affiliated with the Federal Reserve Bank of Chicago. We thank Gadi Barlevy, Jeffrey Campbell, Stefania D'Amico, Alan Greenspan, Alejandro Justiniano, Leonardo Melosi, Taisuke Nakata, Valerie Ramey, David Romer, Paulo Surico, François Velde, Johannes Wieland, and Justin Wolfers for helpful comments, and Theodore Bogusz, David Kelley and Trevor Serrao for superb research assistance. We also thank Michael McMahon for providing us with machine-readable FOMC minutes and transcripts. The views expressed herein are those of the authors and do not necessarily represent the views of the Federal Open Market Committee or the Federal Reserve System.
Introduction
To what extent should uncertainty affect monetary policy? This classical question is relevant
today as the Fed considers when to start increasing the federal funds rate. In the December 2014 Summary of Economic Projections (SEP), most Federal Open Market Committee
(FOMC) participants forecast that the unemployment rate would return to its long-run neutral level by late 2015 and that inflation would gradually rise back to its 2 percent target.
This forecast could go wrong in two ways. One is that the FOMC may be overestimating the
underlying strength in the real economy or in the factors that will return inflation to target.
Guarding against these risks calls for cautious removal of accommodation. The second is
that we could be poised for a much stronger rise in inflation than currently projected. This
risk calls for more aggressive rate hikes. How should policy manage these divergent risks?
In our view, the biggest risk we face today is prematurely engineering restrictive monetary
conditions. If the FOMC misjudges the impediments to growth and inflation and reduces
monetary accommodation too soon, it could find itself in the very uncomfortable position
of falling back into the ZLB environment. The implications of the ZLB for the attainment
of the FOMC's policy goals are severe. It is true that the FOMC has access to unconventional policy tools at the ZLB, but these appear to be imperfect substitutes for the traditional funds rate instrument. Furthermore, there is no guarantee that they would be as successful as they have been in the past if monetary policy tightened and the economy were then to return quickly to the ZLB, in part because the credibility that supported the alternative tools' prior efficacy could be substantially diminished by an unduly hasty exit from the ZLB.
In contrast, it is reasonable to imagine that the costs of inflation running moderately above target for a while are much smaller than the costs of falling back into the ZLB. This is not least because inflation likely could be brought back into check with modest increases in interest rates. These measured rate increases likely would be manageable for the real economy, particularly if industry and labor markets had already overcome the headwinds that have kept productive resources from being efficiently and fully employed. In addition, inflation in the U.S. has averaged well under the 2 percent mark for the past six and a half years. With a symmetric inflation target, one could imagine moderately-above-target inflation for a limited period of time as simply the flip side of the recent inflation experience, and hardly an event that would impose great costs on the economy.
To summarize, raising rates too early increases the likelihood of adverse shocks driving
the economy back to the ZLB, while delaying lift-off too long against a more robust economy
could lead to an unwelcome bout of high inflation. Since the tools available to counter the first
scenario may be less effective than the traditional tool of raising rates to counter the second
scenario, the costs of premature lift-off exceed those of delay. It therefore seems prudent
to refrain from raising rates until we are highly certain that the economy has achieved a sustained period of strong growth and that inflation is on a clear trajectory to return to target.1
In this paper we establish theoretically that in the current setting uncertainty about monetary policy being constrained by the ZLB in the future implies an optimal policy of delayed lift-off, the risk management framework just described. We formally define risk management as the principle that policy should be formulated taking into account the dispersion
of shocks around their means. Our main theoretical contribution is to provide a simple
demonstration that within the canonical framework used to study optimal monetary policy
under discretion, the ZLB implies a new role for such risk management through two distinct
economic channels.
The first channel, which we call the expectations channel, arises because the possibility of a binding ZLB tomorrow leads to lower expected inflation and output today, and hence requires some counteracting policy easing today. The second channel, which we call the buffer stock channel, arises because it can be useful to build up output or inflation today
in order to reduce the likelihood and severity of hitting the ZLB tomorrow. We show that
optimal policy when one or both of these channels are operative dictates that lift-off from a
zero interest rate should be delayed at times when a return to the ZLB remains a distinct
possibility. These channels operate in very standard macroeconomic models, so no leap of faith is required to believe they matter for inflation and the output gap.
1
The speech by Evans (2014) at the Peterson Institute for International Economics discusses these issues at greater length.
Our theoretical analysis is predicated on the simplifying assumption that the only policy
instrument available to the central bank is control over short term interest rates. If the
monetary policy toolkit contained alternative instruments that were perfect substitutes for changing the short-term policy rate, then the ZLB would not present any special economic risk and our analysis would be moot. However, even though most central bankers believe unconventional policies such as large scale asset purchases (LSAPs) or forward guidance about policy rates can provide considerable accommodation at the ZLB, no one argues that these tools are on an equal footing with traditional policy instruments.2 The remainder
of this introduction discusses why we think non-conventional policies are unlikely to be good
substitutes for interest rate policies.
One reason for this is that the effects of unconventional policies on economic activity and inflation naturally are much more uncertain than the transmission mechanism of traditional tools. Various studies of LSAPs, for example, have generated a wide range of estimates of their ability to put downward pressure on private borrowing rates and influence the real economy. Furthermore, the effects of both LSAPs and forward guidance about the future path of the federal funds rate are complicated functions of private sector expectations, which by their very nature makes their economic effects highly uncertain.3
Alternative monetary policy tools also carry potential costs that are somewhat different
from those associated with standard policy. The four most commonly cited costs are: the
large increases in reserves generated by LSAPs risk unleashing inflation; a large balance
sheet may make it more difficult for the Fed to raise interest rates when the time comes;
the extended period of low interest rates and Fed intervention in the long-term Treasury
2
For example, while there is econometric evidence that changes in term premia influence activity and inflation, the effects appear to be less powerful than comparably sized movements in the short term policy rate; see D'Amico and King (2015), Kiley (2012) and ?.
3
Bomfim and Meyer (2010), D'Amico and King (2013) and Gagnon, Raskin, Remache, and Sack (2010) find noticeable effects of LSAPs on Treasury term premia while ? and Hamilton and Wu unearth only small effects. Krishnamurthy and Vissing-Jorgensen (2013) argue that the LSAPs have only had a substantial influence on private borrowing rates in the mortgage market. Engen, Laubach, and Reifschneider (2015) and Campbell, Evans, Fisher, and Justiniano (2012) provide analysis of the dependence of LSAPs and forward guidance on private sector expectations.
and MBS markets may induce inefficient allocation of credit and financial fragility; and the
large balance sheet puts the Fed at risk of incurring financial losses if rates rise too quickly
and such losses could undermine Federal Reserve support and independence.4 For the most
part these costs appear to be very hard to quantify, and so naturally elevate the level of
uncertainty associated with ZLB policies.
A consequence of all of the uncertainty over benefits and costs is that unconventional tools are likely to be used more cautiously than traditional policy instruments. For example, Bernanke (2005) emphasizes that because of "the uncertain costs and benefits of them . . . the hurdle for using unconventional policies should be higher than for traditional policies." Furthermore, at least conceptually, some of the benefits of ZLB policies may be decreasing, and the costs increasing, in the size of the balance sheet or in the amount of time spent in a very low interest rate environment.5 Accordingly, policies that had wide-spread support early on in a ZLB episode might be difficult to extend or expand with an already large balance sheet or with smaller shortfalls from policy targets.
So, while valuable, alternative policies also appear to be less-than-perfect substitutes for changes in short term policy rates. Accordingly, the ZLB presents a different set of risks to policymakers than those that they face during more conventional times, and thus it is worthy of consideration in its own right. We abstract from unconventional policy tools for the remainder of our analysis.
The canonical framework of monetary policy analysis assumes that the central bank sets
the nominal interest rate to minimize a quadratic loss function of the deviation of inflation
4
These costs are mitigated, however, by additional tools the Fed has introduced to enhance control over
interest rates when the time comes to exit the ZLB and by enhanced supervisory and regulatory efforts to
monitor and address potential financial stability concerns. Furthermore, continued low rates of inflation and
contained private-sector inflationary expectations have reduced concerns regarding an outbreak of inflation.
5
Krishnamurthy and Vissing-Jorgensen (2013) argue successive LSAP programs have had a diminishing
influence on term premia. Surveys conducted by Blue Chip and the Federal Reserve Bank of New York
also indicate that market participants are less optimistic that further asset purchases would provide much
stimulus if the Fed was forced to expand their use in light of unexpected economic weakness.
from its target and the output gap, and that the economy is described by a set of linear equations. This framework allows the optimal interest rate to be calculated as a function of the underlying shocks or economic fundamentals. In most applications, uncertainty is incorporated as additive shocks to these linear equations capturing factors outside the model that lead to variation in economic activity or inflation.6
A limitation of this approach is that, by construction, it denies that a policymaker might
choose to adjust policy in the face of changes in uncertainty about economic fundamentals.
However, the evidence discussed below in Section 3 suggests that in practice, policymakers
are sensitive to uncertainty and respond by following what appears to be a risk management
approach. Motivating why a central banker should behave in this way requires some departure from the canonical framework. The main contribution of this section is to consider a
departure associated with the possibility of a binding ZLB in the future.
We show that when a policymaker might be constrained by the ZLB in the future,
optimal policy today should take account of uncertainty about fundamentals. We focus
on two distinct channels through which this can occur. To keep the analysis transparent
we study these channels using two closely related but different models. We first use the
workhorse forward-looking New Keynesian model to illustrate the expectations channel, in which the possibility of a binding ZLB tomorrow leads to lower expected inflation and output today, thus necessitating policy easing today. We then use a backward-looking
Old Keynesian set-up to illustrate the buffer stock channel, in which it can be optimal to
build up output or inflation today in order to reduce the likelihood and severity of being
constrained by the ZLB tomorrow.7 After describing these two channels we study some
numerical simulations to assess their quantitative effects.
6
This framework can be derived from a micro-founded DSGE model (see for instance Woodford (2003),
Chapter 6), but it has a longer history and is used even in models that are not fully micro-founded. For
instance, the Federal Reserve Board staff routinely conducts optimal policy exercises in the FRB-US model; see for example English, López-Salido, and Tetlow (2013).
7
Both of these channels operate in modern DSGE models such as Christiano, Eichenbaum, and Evans
(2005) and Smets and Wouters (2007).
2.1 The Forward-Looking Model
The simple New Keynesian model has well established micro-foundations based on price
stickiness. Given that there are many excellent expositions of these foundations, e.g. Woodford (2003) or Gali (2008), we just state our notation without much explanation. The model
consists of two main equations, the Phillips curve and the IS curve.
The Phillips curve is specified as
$$\pi_t = \kappa x_t + \beta E_t \pi_{t+1} + u_t. \qquad (1)$$
In (1), $\pi_t$ and $x_t$ are both endogenous variables and denote inflation and the output gap at date $t$. $E_t$ is the date-$t$ conditional expectations operator; rational expectations are assumed. The variable $u_t$ is a mean zero exogenous cost-push shock, $0 < \beta < 1$, and $\kappa > 0$. For simplicity we assume the central bank has a constant inflation target equal to zero, so $\pi_t$ is the deviation of inflation from that target.8 The cost-push shock represents exogenous changes to inflation such as an independent decline in inflation expectations, dollar appreciation or changes in oil prices.
The IS curve is specified as
$$x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - n_t\right), \qquad (2)$$
where $\sigma > 0$, $i_t$ is the nominal interest rate controlled by the central bank, and $n_t$ is the natural rate of interest given by
$$n_t = \bar{r} + g_t + \sigma E_t\left(z_{t+1} - z_t\right). \qquad (3)$$
In this specification of the natural rate, $g_t$ is a mean zero demand shock, and $z_t$ is the exogenous log of potential output. With constant potential output and demand shock equal to zero, the natural rate is constant at $\bar{r}$.
8
The assumption of a constant inflation target distinguishes our analysis from those of Coibion, Gorodnichenko, and Wieland (2012) and Eggertsson and Pugsley (2009) who investigate how changes in the central bank's inflation target affect outcomes near the ZLB.
The central bank sets the nominal interest rate under discretion to minimize the loss function
$$L = \frac{1}{2} E_0 \sum_{t=0}^{\infty} \beta^t \left(\pi_t^2 + \lambda x_t^2\right), \qquad (4)$$
where $\lambda > 0$ is the weight on output stabilization.
9
Woodford (2003, p. 248) defines the natural rate as the equilibrium real rate of return in the case of fully flexible prices. As discussed by Barsky, Justiniano, and Melosi (2014), in medium-scale DSGE models with many shocks the appropriate definition of the natural rate is less clear.
10
Uncertainty itself could give rise to gt shocks. A large amount of recent work, following Bloom (2009),
suggests that private agents react to increases in economic uncertainty, leading to a decline in economic
activity. One channel is that higher uncertainty may lead to precautionary savings which depresses demand, as emphasized by Basu and Bundick (2013), Fernández-Villaverde, Guerrón-Quintana, Kuester, and Rubio-Ramírez (2012) and Born and Pfeifer (2014).
It is well known from the contributions of Krugman (1998), Eggertsson and Woodford (2003), Woodford (2012) and Werning (2012) that commitment can reduce the severity of the ZLB problem by creating higher expectations of inflation and the output gap. One implication of these studies is that the central bank should commit to keeping the policy rate at zero longer than would be prescribed by discretionary policy. By studying optimal policy under discretion we find a different rationale for a policy of keeping rates "lower for longer" that does not rely
on the central bank having the ability to commit to a time-inconsistent policy.11 Second, we
think this approach better approximates the institutional environment in which the FOMC
operates.
2.1.1 A ZLB Scenario
We study optimal policy when the central bank is faced with the following simple ZLB scenario. The central bank observes the current value of the natural rate, $n_0$, and the cost-push shock $u_0$; moreover, there is no uncertainty in the natural rate after $t = 2$, $n_t = \bar{r} > 0$ for all $t \ge 2$, nor in the cost-push shock after $t = 1$, $u_t = 0$ for all $t \ge 1$. However, there is uncertainty at $t = 1$ regarding the natural rate $n_1$. The variable $n_1$ is assumed to be distributed according to the probability density function $f(\cdot)$.12
This very simple scenario keeps the optimal policy calculation tractable while preserving
the main insights. We also think it captures some key elements of uncertainty faced by the
FOMC today. Since (1) and (2) do not contain endogenous state variables we do not have to take a stand on whether the ZLB is binding before $t = 0$. One possibility is that the natural rate $n_t$ is negative for $t < 0$ and that the policy rate is set at zero, $i_t = 0$ for $t < 0$, but by $t = 0$ the economy is close to being unconstrained by the ZLB. However, there is uncertainty as to when this will happen. The natural rate might be low enough at $t = 1$
11
Implicitly we are assuming the central bank does not have the ability to employ what Campbell et al. (2012) call "Odyssean" forward guidance. However our model is consistent with the central bank using forward guidance in the "Delphic" sense described by Campbell et al. (2012) because agents anticipate how the central bank reacts to evolving economic conditions.
12
There is ample evidence of considerable uncertainty regarding the natural rate. See for example Barsky
et al. (2014), Hamilton, Harris, Hatzius, and West (2015) and Laubach and Williams (2003).
such that the ZLB still binds, or the economy may recover enough so that the natural rate
is positive. This allows us to consider the optimal timing of lift-off.
2.1.2 Analysis
To find the optimal policy, we solve the model backwards from $t = 2$ and focus on the policy choice at $t = 0$. First, for $t \ge 2$, it is possible to perfectly stabilize the economy by setting the nominal interest rate equal to the (now positive) natural rate, $i_t = n_t = \bar{r}$. This leads to $\pi_t = x_t = 0$ for $t \ge 2$.13 The optimal policy at $t = 1$ will depend on the realized value of the natural rate $n_1$. If $n_1 \ge 0$, then it is again possible (and optimal) to perfectly stabilize by setting $i_1 = n_1$, leading to $x_1 = \pi_1 = 0$. However if $n_1 < 0$, the ZLB binds and consequently $x_1 = n_1/\sigma < 0$ and $\pi_1 = \kappa n_1/\sigma < 0$. The expected output gap at $t = 1$ is hence $E_0 x_1 = \int_{-\infty}^{0} \nu f(\nu)\,d\nu / \sigma \le 0$ and expected inflation is $E_0 \pi_1 = \kappa E_0 x_1 < 0$.
Because agents are forward-looking, this low expected output gap and inflation feed backward to $t = 0$. A low output gap tomorrow depresses output today by a wealth effect via the IS curve and depresses inflation today through the Phillips curve. Low inflation tomorrow depresses inflation today since price setting is forward looking in the Phillips curve, and depresses output today by raising the real interest rate via the IS curve.14 The optimal policy at $t = 0$ must take these effects into account. This implies that optimal policy will be looser than if there were no chance that the ZLB binds tomorrow.
Substituting for $\pi_0$ and $i_0$ using (1) and (2), and taking into account the ZLB constraint, optimal policy at $t = 0$ solves the following problem:
$$\min_{x_0}\ \frac{1}{2}\left[\left(\kappa x_0 + \beta E_0 \pi_1 + u_0\right)^2 + \lambda x_0^2\right] \quad \text{s.t.} \quad x_0 \le E_0 x_1 + \frac{1}{\sigma}\left(n_0 + E_0 \pi_1\right).$$
13
We note that this simple interest rate rule implements the equilibrium $\pi_t = x_t = 0$, but is also consistent with other equilibria. However there are standard ways to rule out these other equilibria. See for instance Gali (2008, pp. 76-77) for a discussion. From now on, we will not mention this issue.
14
See Johannsen (2014) and Nakata (2013b) who describe this mechanism in economies with an interest rate feedback rule. Note that this mechanism is distinct from the precautionary savings motives at the ZLB discussed in footnote 10.
Two cases arise, depending on whether the ZLB binds at $t = 0$ or not. Define the threshold value
$$\underline{n}_0 = -\frac{\sigma\kappa}{\lambda + \kappa^2}\,u_0 - \left(1 + \frac{\kappa}{\sigma} + \frac{\beta\kappa^2}{\lambda + \kappa^2}\right)\int_{-\infty}^{0} \nu f(\nu)\,d\nu.$$
If $n_0 > \underline{n}_0$, then the optimal policy is to follow the standard monetary policy response to an inflation shock to the Phillips curve, $\beta E_0 \pi_1 + u_0$, leading to:
$$x_0 = -\frac{\kappa}{\lambda + \kappa^2}\left(\beta E_0 \pi_1 + u_0\right); \qquad \pi_0 = \frac{\lambda}{\lambda + \kappa^2}\left(\beta E_0 \pi_1 + u_0\right).$$
The interest rate that implements this outcome is
$$i_0 = n_0 + \frac{\sigma\kappa}{\lambda + \kappa^2}\,u_0 + \left(1 + \frac{\kappa}{\sigma} + \frac{\beta\kappa^2}{\lambda + \kappa^2}\right)\int_{-\infty}^{0} \nu f(\nu)\,d\nu. \qquad (5)$$
As long as $\int_{-\infty}^{0} \nu f(\nu)\,d\nu < 0$, (5) implies that the optimal interest rate is lower than if there were no chance of a binding ZLB tomorrow, i.e. the case of $f(\nu) = 0$ for $\nu \le 0$. The interest rate is lower today to offset the deflationary and recessionary effects of the possibility of a binding ZLB tomorrow. If $n_0 < \underline{n}_0$, then the ZLB binds today and optimal policy is $i_0 = 0$.15
Notice from (5) that for some parameters, the ZLB will bind at $t = 0$ even though it would not bind if agents were certain that the economy would perform well afterwards. Specifically, if agents were certain that the ZLB would not bind at $t = 1$, then $E_0 x_1 = E_0 \pi_1 = 0$ and $i_0 = 0$ only if $n_0 \le -\sigma\kappa u_0/(\lambda + \kappa^2)$, a cutoff that lies below $\underline{n}_0$ whenever $\int_{-\infty}^{0} \nu f(\nu)\,d\nu < 0$. So the possibility of the ZLB binding tomorrow increases the chances of being constrained by the ZLB today.
Turning specifically to the issue of uncertainty, we obtain the following unambiguous result:

Proposition 1. Since $E_0 x_1 = E_0[\min(n_1, 0)]/\sigma$ is the expectation of a concave function of $n_1$, higher uncertainty in the form of a mean-preserving spread about $n_1$ leads to lower, i.e. more negative, $E_0 x_1$ and $E_0 \pi_1$, and hence lower $i_0$.16

15
Since $E_0 x_1$ is a sufficient statistic for $\int_{-\infty}^{0} \nu f(\nu)\,d\nu$ in (5), the optimal policy has the flavor of a traditional forward-looking policy reaction function. However $E_0 x_1$ is not independent of a mean-preserving spread in the distribution of $n_1$, so optimal policy here departs from the certainty equivalence principle, which says that the extent of uncertainty in the underlying fundamentals does not affect the optimal interest rate. Recent statements of the certainty equivalence principle in models with forward-looking variables can be found in Svensson and Woodford (2002, 2003).
Another interesting feature of the solution is that the distribution of the positive values
of $n_1$ is irrelevant for policy. That is, policy is set only with respect to the states of the world in
which the ZLB might bind tomorrow. The logic is that if a very high value of n1 is realized,
monetary policy can adjust to it and prevent a bout of inflation. This is a consequence of
the standard principle that, outside the ZLB, demand shocks can and should be perfectly
offset by monetary policy.
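To make the result concrete, here is a small numerical illustration of equation (5); a minimal sketch, assuming $n_1$ is normally distributed (our choice; the theory puts no restriction on $f$) and borrowing parameter values from Table 1 below. The function name and the Monte Carlo approximation of the integral are ours.

```python
# Illustrates equation (5): the optimal time-0 rate falls as uncertainty about
# the natural rate n_1 rises (Proposition 1). Parameters are from Table 1;
# the normal distribution for n_1 is an assumption for this illustration.
import numpy as np

kappa, sigma, beta, lam = 0.02, 2.0, 0.995, 0.25

def optimal_i0(n0, u0, mu, sd, ndraws=1_000_000, seed=0):
    """Equation (5), with the integral of nu*f(nu) over nu < 0 computed
    by Monte Carlo as E[min(n_1, 0)] for n_1 ~ N(mu, sd^2)."""
    n1 = np.random.default_rng(seed).normal(mu, sd, ndraws)
    tail = np.minimum(n1, 0.0).mean()            # = int_{-inf}^0 nu f(nu) dnu
    coef = 1.0 + kappa / sigma + beta * kappa**2 / (lam + kappa**2)
    i0 = n0 + sigma * kappa / (lam + kappa**2) * u0 + coef * tail
    return max(i0, 0.0)                          # impose the ZLB at t = 0

# A mean-preserving spread about E(n_1) = 1 lowers the optimal rate:
for sd in (0.5, 1.0, 2.0):
    print(f"sd = {sd}: i_0 = {optimal_i0(n0=1.0, u0=0.0, mu=1.0, sd=sd):.3f}")
```

Only the left tail of the distribution of $n_1$ matters here: widening the spread deepens $E_0[\min(n_1, 0)]$ even though the mean is unchanged, which is exactly the asymmetry driving Proposition 1.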
2.1.3 Discussion
Proposition 1 has several predecessors; perhaps the closest are Adam and Billi (2007), Nakata
(2013a,b) and Nakov (2008) who demonstrate numerically how, in a stochastic environment,
the ZLB leads the central bank to adopt a looser policy. Our contribution is to provide
a simple analytical example.17 This result has been correctly interpreted to mean that if
negative shocks to the natural rate lead the economy to be close to the ZLB, the optimal
response is to reduce the interest rate aggressively to reduce the likelihood that the ZLB
becomes binding. The same logic applies to liftoff. Following an episode where the ZLB has
been a binding constraint, the central bank should not raise rates as if the ZLB constraint
were gone forever. Even though the best forecast may be that the economy will recover and
exit the ZLB i.e., in the context of the model, that E(n1 ) > 0 it can be optimal to have
zero interest rates today. Note that policy is looser when the probability of being constrained by the ZLB in the future is high or the potential severity of the ZLB problem is large, i.e. $\int_{-\infty}^{0} \nu f(\nu)\,d\nu$ is a large negative number, and when the economy is less sensitive to interest rates (high $\sigma$).
16
See Mas-Colell, Whinston, and Green (1995, Proposition 6.D.2, p. 199) for the relevant result regarding the effect of a mean-preserving spread on the expected value of concave functions of a random variable.
17
See also Nakata and Schmidt (2014) for a related analytical result in a model with two-state Markov shocks.
18
In the January 2015 Federal Reserve Bank of New York survey of Primary Dealers, respondents put the odds of returning to the ZLB within two years following liftoff at 20%.
2.2 The Backward-Looking Model
The buffer stock channel does not rely on forward-looking behavior, but rather on the view
that the economy has some inherent momentum, e.g. due to adaptive inflation expectations,
inflation indexation, habit persistence, adjustment costs and hysteresis. Suppose that output
or inflation have a tendency to persist. If there is a risk that the ZLB binds tomorrow,
building up output and inflation today creates some buffer against hitting the ZLB tomorrow.
This intuition does not guarantee that it is optimal to increase output or inflation today.
In particular, the benefit of higher inflation or output today in the event that a ZLB event
arises tomorrow must of course be weighed against the costs of excess output and inflation
today, and tomorrows cost to bring down the output gap or inflation if the economy turns
out not to hit the ZLB constraint. So it is important to verify that our intuition holds up
in a model.
To isolate the buffer stock channel from the expectations channel we focus on a purely
backward-looking Old Keynesian model. Purely backward-looking models do not have
micro-foundations like the New Keynesian model does, but backward-looking elements appear to be important empirically.19 Backward-looking models have been studied extensively in the literature, including by Laubach and Williams (2003), Orphanides and Williams
(2002), Reifschneider and Williams (2000) and Rudebusch and Svensson (1999).
The model we study replaces (1) and (2) from the forward-looking model with
$$\pi_t = \alpha \pi_{t-1} + \kappa x_t + u_t; \qquad (6)$$
$$x_t = \delta x_{t-1} - \frac{1}{\sigma}\left(i_t - \pi_{t-1} - n_t\right), \qquad (7)$$
where $0 < \alpha < 1$ and $0 < \delta < 1$. This model is essentially the same as the one studied
by Reifschneider and Williams (2000). Unlike in the New Keynesian model, it is difficult to obtain sharp analytical results here without further assumptions.
19
Indeed empirical studies based on medium-scale DSGE models, such as those considered by Christiano
et al. (2005) and Smets and Wouters (2007), find backward-looking elements are essential to account for the
empirical dynamics. Backward-looking terms are important in single-equation estimation as well. See for
example Fuhrer (2000), Gali and Gertler (1999) and Eichenbaum and Fisher (2007).
2.2.1 Analysis
As before, we solve the model backwards from $t = 2$ to determine the optimal policy choice at $t = 0$ and how this is affected by uncertainty in the natural rate at $t = 1$. After $t = 1$ the economy does not experience any more shocks, so $n_t = \bar{r}$ for $t \ge 2$, but it inherits initial lagged inflation and output terms $\pi_1$ and $x_1$, which may be positive or negative. The output gap term can be easily adjusted by changing the interest rate $i_t$, provided the central bank is not constrained by the ZLB at $t = 2$, i.e. if $n_2 = \bar{r}$ is large enough, an assumption we will maintain.20 Given the quadratic loss, it is optimal to smooth this adjustment over time, so the economy will converge back to its steady-state slowly. The details of this adjustment process after $t = 2$ are not very important for our analysis. What is important is that the overall loss of starting from $t = 2$ with a lagged inflation $\pi_1$ and output gap $x_1$ turns out to be a quadratic function of $\pi_1$ only; we can write it as $W \pi_1^2 / 2$, where $W$ is a constant that depends on $\alpha$, $\kappa$, and $\lambda$ and is calculated in the appendix.
Turn now to optimal policy at $t = 1$. Take the realization of $n_1$ and last period's output gap $x_0$ and inflation $\pi_0$ as given. Substituting for $\pi_1$ and $i_1$ using (6) and (7), and taking into account the ZLB constraint, optimal policy at $t = 1$ solves the following problem:
$$V(x_0, \pi_0, n_1) = \min_{x_1}\ \frac{1}{2}\left[\left(\alpha \pi_0 + \kappa x_1\right)^2 + \lambda x_1^2\right] + \frac{W}{2}\pi_1^2 \quad \text{s.t.} \quad x_1 \le \delta x_0 + \frac{\pi_0 + n_1}{\sigma},$$
where $\pi_1 = \alpha \pi_0 + \kappa x_1$ and the policymaker now anticipates the cost of having inflation $\pi_1$ tomorrow. Her optimal choice implies a threshold value of the natural rate, $\bar{n}_1(x_0, \pi_0)$, below which the ZLB binds:
$$\bar{n}_1(x_0, \pi_0) = -\left(\frac{\sigma(1+W)\kappa\alpha}{(1+W)\kappa^2 + \lambda} + 1\right)\pi_0 - \sigma\delta x_0. \qquad (8)$$
For $n_1 \ge \bar{n}_1(x_0, \pi_0)$ the ZLB is not binding, otherwise it is. Hence the probability of hitting the ZLB is
$$P(x_0, \pi_0) = \int_{-\infty}^{\bar{n}_1(x_0, \pi_0)} f(\nu)\,d\nu.$$
In contrast to the forward-looking case where the probability of being constrained by the ZLB is exogenous, it is now endogenous at $t = 1$ and can be influenced by policy at $t = 0$. As indicated by (8), a higher output gap or inflation at $t = 0$ will reduce the likelihood of hitting the ZLB at $t = 1$.
If $n_1 \ge \bar{n}_1(x_0, \pi_0)$, optimal policy at $t = 1$ yields
$$x_1 = -\frac{(1+W)\kappa\alpha}{(1+W)\kappa^2 + \lambda}\,\pi_0; \qquad \pi_1 = \frac{\lambda\alpha}{(1+W)\kappa^2 + \lambda}\,\pi_0.$$
This is similar to the forward-looking model solution that reflects the trade-off between output and inflation, except that optimal policy now takes into account the cost of having inflation away from target tomorrow, through $W$. The loss for this case is $V(x_0, \pi_0, n_1) = W \pi_0^2 / 2$ since in this case the problem is the same as the one faced at $t = 2$. If $n_1 < \bar{n}_1(x_0, \pi_0)$ the ZLB binds, in which case
$$x_1 = \delta x_0 + \frac{\pi_0 + n_1}{\sigma}; \qquad \pi_1 = \kappa\delta x_0 + \left(\alpha + \frac{\kappa}{\sigma}\right)\pi_0 + \frac{\kappa n_1}{\sigma}. \qquad (9)$$
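As an aside, the constant $W$ need not be taken on faith: since the unconstrained problem at $t = 1$ with inherited inflation $\pi_0$ has the same form as the problem at $t = 2$, substituting the unconstrained solution above back into the value function yields a fixed-point condition. A sketch, under the notation of our reconstruction of the equations above (the appendix derivation may differ in details):
$$\frac{W}{2}\pi_0^2 = \frac{1}{2}\left(\pi_1^2 + \lambda x_1^2\right) + \frac{W}{2}\pi_1^2 \quad\Longrightarrow\quad W\left[(1+W)\kappa^2 + \lambda\right] = \lambda\alpha^2(1+W),$$
a quadratic in $W$ with a unique positive root.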
The expected loss from $t = 1$ on, as a function of the output gap and inflation at $t = 0$, is then given by:
$$L(x_0, \pi_0) = \frac{W\pi_0^2}{2}\int_{\bar{n}_1(x_0,\pi_0)}^{\infty} f(\nu)\,d\nu + \int_{-\infty}^{\bar{n}_1(x_0,\pi_0)} \left\{\frac{1+W}{2}\left[\kappa\delta x_0 + \left(\alpha + \frac{\kappa}{\sigma}\right)\pi_0 + \frac{\kappa\nu}{\sigma}\right]^2 + \frac{\lambda}{2}\left[\delta x_0 + \frac{\pi_0 + \nu}{\sigma}\right]^2\right\} f(\nu)\,d\nu.$$
This expression reveals that the initial conditions $x_0$ and $\pi_0$ matter by shifting (i) the payoff from continuation in the non-ZLB states, $W\pi_0^2/2$, (ii) the payoff in the case where the ZLB binds (the second integral), and (iii) the relative likelihood of ZLB and non-ZLB states through $\bar{n}_1(x_0, \pi_0)$. Since the loss function is continuous in $n_1$ (even at $\bar{n}_1(x_0, \pi_0)$), this last effect is irrelevant for welfare at the margin.
The last step is to find the optimal policy at time 0, taking into account the effect on the expected loss tomorrow:
$$\min_{x_0}\ \frac{1}{2}\left[\left(\alpha\pi_{-1} + \kappa x_0 + u_0\right)^2 + \lambda x_0^2\right] + L(x_0, \pi_0) \quad \text{s.t.} \quad x_0 \le \delta x_{-1} + \frac{\pi_{-1} + n_0}{\sigma},$$
where $\pi_0 = \alpha\pi_{-1} + \kappa x_0 + u_0$ and $x_{-1}$, $\pi_{-1}$ are the inherited initial conditions.
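To see the endogeneity of the ZLB probability numerically, here is a minimal sketch; the normal distribution for $n_1$, the scipy dependency, and the fixed-point formula for $W$ (from the sketch above) are our assumptions, with parameters from Table 1 below.

```python
# The buffer stock channel: policy at t = 0 moves the threshold in equation
# (8) and hence the probability of hitting the ZLB at t = 1.
import numpy as np
from scipy.stats import norm

kappa, sigma, lam = 0.02, 2.0, 0.25
alpha, delta = 0.95, 0.75

# W solves W[(1+W)kappa^2 + lam] = lam alpha^2 (1+W); take the positive root.
a, b, c = kappa**2, kappa**2 + lam * (1.0 - alpha**2), -lam * alpha**2
W = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)

def zlb_threshold(x0, pi0):
    """n_bar_1(x0, pi0) from equation (8); the ZLB binds iff n_1 falls below it."""
    D = (1.0 + W) * kappa**2 + lam
    return -(sigma * (1.0 + W) * kappa * alpha / D + 1.0) * pi0 - sigma * delta * x0

def zlb_prob(x0, pi0, mu=0.5, sd=1.0):
    """P(x0, pi0), assuming n_1 ~ N(mu, sd^2)."""
    return norm.cdf(zlb_threshold(x0, pi0), loc=mu, scale=sd)

# A positive buffer of output and inflation today lowers tomorrow's ZLB odds:
print(zlb_prob(x0=0.0, pi0=0.0))  # no buffer
print(zlb_prob(x0=0.5, pi0=0.5))  # buffer built up at t = 0
```

Raising $x_0$ or $\pi_0$ pushes the threshold down and the probability with it, which is the margin the time-0 problem above trades off against the cost of the buffer itself.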
2.2.2 Discussion
As far as we know Proposition 2 is a new result, but its implications are similar to those
of Proposition 1. As in the forward-looking case, lift-off from an optimal zero interest rate should be delayed today when an increase in uncertainty about the natural rate or cost-push shock raises the odds of the ZLB binding tomorrow. Similarly, starting from an interest rate above the ZLB, an increase in uncertainty about the natural rate or cost-push shocks that raises the likelihood of being constrained by the ZLB tomorrow leads to a faster drop in the policy rate today. So the buffer stock channel and the expectations
channel have very similar policy implications but for very different reasons. The expectations
channel involves the possibility of being constrained by the ZLB tomorrow feeding backward
to looser policy today. The buffer stock channel has looser policy today feeding forward to
reduce the likelihood and severity of being at the ZLB tomorrow.
It is useful to compare the policy implications of the buffer stock channel to the argument developed in Coibion et al. (2012). That paper studies the ZLB in the context of policy reaction functions rather than optimal policy. It finds that an increase in the central bank's inflation target can reduce the likelihood and severity of policy being constrained by the ZLB. Our analysis does not require such a drastic change in monetary policy in order to improve outcomes; this is accomplished through standard interest rate policy. It is not necessary for the central bank to resort to a change to its inflation target, which could damage its hard-earned credibility.21
2.3 Quantitative Assessment
The previous sections demonstrate how higher uncertainty about future conditions justifies a looser policy. In the current environment this implies delayed liftoff. To assess the quantitative magnitudes of these effects, and to illustrate some qualitative features of the solution,
this section constructs some more realistic examples, which are solved numerically. Using
parameters drawn from the literature and an estimate of the conditions prevailing today, we
compare the outcomes from optimal policy (under discretion) with the well-known Taylor
rule as specified in Taylor (1993).
2.3.1 Parameter values
The parameters underlying our quantitative analysis are reported in Table 1. The time period is one quarter and we take $t = 1$ to be 2015q1. The natural real rate of interest $n_t$ begins below zero and rises gradually to its terminal value, as summarized in Table 1.
21
Coibion et al. (2012) do their analysis within the context of a medium-scale DSGE model with both
forward and backward looking elements. It remains to be seen how important the buffer stock channel is for
outcomes in a medium-scale DSGE setting.
Table 1: Parameter Values

Description                                Symbol    Value
Discount factor                            $\beta$   0.995
Slope of Phillips curve                    $\kappa$  0.02
Inverse elasticity of substitution         $\sigma$  2
Backward-looking IS curve coef.            $\delta$  0.75
Backward-looking Phillips curve coef.      $\alpha$  0.95
Std. dev. of natural rate innovation                 0.3
Std. dev. of cost-push innovation                    0.15
Serial correlation of natural rate                   0.92
Serial correlation of cost-push                      0.3
Weight on output stabilization             $\lambda$ 0.25
Steady-state inflation (annualized)                  2
Terminal natural rate (annualized)         $\bar{r}$ 1.75
Quarters to reach terminal natural rate    $T_0$     16
Value of natural rate at time 1            $n_0$     -0.5
Taylor rule coefficient on inflation                 1.5
Taylor rule coefficient on output gap                0.5
Initial condition for the output gap       $x_0$     -1.5
Initial condition for inflation            $\pi_0$   1.3

Note: Values of standard deviations, inflation and the natural rate are shown in percentage points.
Both models are subject to natural rate and cost-push shocks that are independent AR(1) processes with autocorrelation coefficients $\rho_n$ and $\rho_u$ and innovation standard deviations $\sigma_n$ and $\sigma_u$. We use parameter values that are similar to the literature, with the cost-push shock less persistent and less volatile than the natural rate shock.22
Our calculations assume that there is a date $T > T_0$ such that, after time $T$, the natural rate is constant equal to $\bar{r}$, and there are no further shocks. This allows us to calculate the equilibrium fairly easily by backward induction under different policy rules. The precise value of $T$ does not matter for our results, provided it is large enough. Details of the computational methods used for each model are available in the online appendix. Importantly, and in contrast to most of the literature, our numerical methods allow for uncertainty to affect policy, and to be reflected in welfare.23
22
See for instance Adam and Billi (2007) for a calibration of a similar model, and Laubach and Williams (2003) for estimates of the volatility of the natural rate.
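To make the backward-induction procedure concrete, here is a compact example for the forward-looking model under discretion; a minimal sketch, not the appendix's exact algorithm. It keeps only the natural-rate shock, and the linear deterministic path for the natural rate, the grid, the quadrature order, and the use of annualized percentage units are all our assumptions.

```python
# Backward induction for the forward-looking model under discretion, with
# uncertainty affecting policy. Natural-rate shock only; parameters from
# Table 1, with rates in annualized percentage points.
import numpy as np

kappa, sigma, beta, lam = 0.02, 2.0, 0.995, 0.25
rho, sd = 0.92, 0.3                             # AR(1) deviation of n_t from path
T, T0 = 60, 16
nbar, n_init = 1.75, -0.5                       # terminal and initial natural rate
path = np.concatenate([np.linspace(n_init, nbar, T0), np.full(T - T0, nbar)])

dev = np.linspace(-2.5, 2.5, 201)               # grid for the AR(1) deviation
eps, w = np.polynomial.hermite_e.hermegauss(9)  # nodes/weights for a N(0,1) shock
w = w / w.sum()

Ex = np.zeros_like(dev)    # E_t x_{t+1} as a function of the date-t deviation
Epi = np.zeros_like(dev)   # E_t pi_{t+1}
for t in range(T - 1, -1, -1):
    n = path[t] + dev
    x_unc = -kappa * beta * Epi / (lam + kappa**2)   # unconstrained discretion
    x_zlb = Ex + (n + Epi) / sigma                   # the value of x_t when i_t = 0
    x = np.minimum(x_unc, x_zlb)                     # ZLB binds when x_unc is infeasible
    pi = kappa * x + beta * Epi
    i = np.maximum(sigma * (Ex - x) + Epi + n, 0.0)  # implied policy rate
    if t == 0:
        print("date-0 policy rate on the baseline path:", np.interp(0.0, dev, i))
    # Step expectations back one period, integrating over the AR(1) innovation.
    Ex = sum(wi * np.interp(rho * dev + sd * e, dev, x) for e, wi in zip(eps, w))
    Epi = sum(wi * np.interp(rho * dev + sd * e, dev, pi) for e, wi in zip(eps, w))
```

The same machinery generates Taylor-rule paths, with the static optimization step replaced by the rule.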
We use the same values for the parameters that are common to the two models. The Phillips curve slope is $\kappa = 0.02$, the elasticity of substitution is $1/\sigma = 0.5$ and the discount factor is $\beta = 0.995$, all standard settings in the New Keynesian literature. The Taylor rule has weight $\phi_\pi = 1.5$ on inflation, weight $\phi_x = 0.5$ on the output gap, and constant term equal to 3.75% (the long-run natural real rate of 1.75% plus the 2% inflation target).
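In code, the rule used in the comparison can be written as follows; a minimal sketch, where expressing the rule in deviation-from-target form with annualized units is our reading of the constant term described above:

```python
# Taylor (1993)-style rule with the ZLB imposed, in annualized percent.
def taylor_rule(pi, x, pi_star=2.0, rbar=1.75, phi_pi=1.5, phi_x=0.5):
    """i = max(0, rbar + pi* + phi_pi (pi - pi*) + phi_x x); constant = 3.75."""
    return max(0.0, rbar + pi_star + phi_pi * (pi - pi_star) + phi_x * x)

print(taylor_rule(pi=1.3, x=-1.5))  # at the initial conditions: 1.95 percent
```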
For the backward-looking model we also need values for the coefficients on the lagged terms in (7) and (6) as well as initial conditions for the output gap and inflation. The coefficient on lagged inflation, $\alpha = 0.95$, is based on a simple instrumental variables estimation of (6). The coefficient on lagged output in the backward-looking model's IS curve is $\delta = 0.75$, chosen to generate significant persistence in the output gap. We assume an initial inflation rate of 1.3%, the latest reading for core PCE inflation as we write this paper, and an initial output gap $x_0 = -1.5\%$. There is obviously substantial uncertainty about the level of the output gap, but a simple or naive calculation using the 2014q4 unemployment rate of 5.7%, an estimate of the natural rate of unemployment (5.0%) and Okun's law yields -1.5%. The CBO reports the output gap to be about -2% as of this writing.
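For concreteness, the naive Okun's law calculation runs as follows (the Okun coefficient of 2 is our assumption for the illustration):
$$x_0 \approx -2 \times (u - u^*) = -2 \times (5.7 - 5.0) = -1.4 \approx -1.5\ \text{percent}.$$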
23
For instance the FRB/US model used at the Board for policy simulations cannot address uncertainty
systematically. Hamilton et al. (2015) use this model to assess the implications of assuming different values
of the natural rate within their wide range of estimates. However by using FRB/US they are unable to
address the impact of uncertainty per se on optimal policy.
2.3.2 Results for the Forward-Looking Model
Figure 1 displays the path of interest rates, inflation and the output gap under two policies: optimal policy under discretion, and the Taylor rule. These are the baseline paths, i.e. those that arise if the realized shocks to the natural rate and to the cost-push variable are identically zero, though optimal policy is set as if they were uncertain. The interest rate panel also displays the nominal natural rate for comparison. As discussed in the theory section, if there were no uncertainty, setting the interest rate equal to this nominal natural rate would implement the optimal outcome, zero output and inflation gaps ($x = 0$ and $\pi = \pi^*$). Hence, the difference between the path for the nominal natural rate and optimal policy demonstrates the extent to which uncertainty affects optimal policy.
Figure 1: Lift-off in the Forward-Looking Model
[Three panels plot the nominal interest rate, inflation, and the output gap over 20 quarters, comparing the natural rate, optimal discretion, and the Taylor rule.]
In this forward-looking model, the initial output and inflation gaps are negative, and
converge back to zero over time. The initial gaps are not initial conditions but are endogenous
21
to uncertainty and to the policy response. They arise because agents anticipate the possibility
of future negative shocks that the central bank may be powerless to offset due to the ZLB.
As the natural rate rises, this risk diminishes and optimal policy lifts off, and eventually
converges to the natural rate. In this simulation, lift off occurs in period 6, i.e. 2016q2.
The delayed lift-off relative to the path of the natural rate is entirely the consequence of the
uncertainty faced by the central bank. By comparison, the Taylor rule, which does not take
into account this uncertainty, implies faster liftoff, in the third period (2015q3). This faster
liftoff (as well as expectations of reactions to possible shocks) leads to lower expected output
and inflation that translate into lower output and inflation today.24
Table 2 compares some outcomes under these two policies, taking into account the uncertainty by simulating 50,000 paths of the model with different shock realizations. Clearly,
optimal policy under discretion achieves a much lower loss, because it recognizes the risk
of negative shocks driving the economy back to the ZLB.25 Time-to-liftoff can be a difficult
statistic to interpret, however, because a reaction function that depends on output and inflation might have a late liftoff because it is too restrictive and thus generates low output
and inflation. For this reason, we find it useful to report the typical conditions under which
liftoff occurs, i.e. the median (across simulations) of inflation and output gap at liftoff. We
find that the Taylor rule typically lifts off when the output gap is -1.5% and inflation is 1%.
But optimal policy lifts off when the output gap is close to zero and inflation is 1.5%.
When comparing policies it is also important to balance the risks associated with each.
The loss function is a relevant summary statistic, but we find it helpful to complement it with
simpler summary statistics. We report for each policy the median (across simulations) of
the maximum inflation over the next 5 years; and similarly the median (across simulations)
of the lowest output gap over the next 5 years. Optimal policy reduces the risk of very
low output gaps substantially, with the median minimum going from -4.5% to -1.7%. But,
24
Indeed, the output gap and inflation at t = 1 are lower than what we observe currently. This might
reflect the effect of some shocks at t = 1 that we do not model, or might simply reflect that the Fed is not
following the Taylor rule.
25
This result does not have to hold by definition. The Taylor rule provides commitment which may lead
to more favorable outcomes, for instance if cost-push shocks are volatile and persistent.
22
Table 2: Outcomes in the Forward-Looking Model

                                            Optimal Policy   Taylor Rule
Expected loss                               0.12             0.76
Median liftoff (quarter)                    6                3
Median output gap at liftoff (%)            0.01             -1.48
Median inflation at liftoff (%)             1.47             1.05
Median maximum inflation over 5 years (%)   3.57             4.48
Median minimum output gap over 5 years (%)  -1.74            -4.47
perhaps surprisingly, optimal policy also reduces the risk of very high inflation, and reduces
the typical maximum inflation from 4.5% to 3.6%. This is because optimal policy responds
quite strongly to shocks that create inflation.
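The summary statistics in Table 2 are straightforward to compute from simulated paths; a minimal sketch, where `simulate_paths` is a hypothetical driver returning arrays of shape (number of simulations, horizon) for the policy rate, inflation, and the output gap:

```python
import numpy as np

def summarize(i, pi, x, horizon_5y=20):
    """Median liftoff timing/conditions and median 5-year extremes across paths."""
    liftoff = (i > 0).argmax(axis=1)  # first quarter with a positive rate
    rows = np.arange(i.shape[0])      # (assumes every path eventually lifts off)
    return {
        "median liftoff quarter": np.median(liftoff),
        "median output gap at liftoff": np.median(x[rows, liftoff]),
        "median inflation at liftoff": np.median(pi[rows, liftoff]),
        "median max inflation, 5 years": np.median(pi[:, :horizon_5y].max(axis=1)),
        "median min output gap, 5 years": np.median(x[:, :horizon_5y].min(axis=1)),
    }

# i, pi, x = simulate_paths(policy="optimal discretion", n_sims=50_000)  # hypothetical
# print(summarize(i, pi, x))
```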
2.3.3 Results for the Backward-Looking Model
Figure 2 is the analogue of Figure 1 for the backward-looking model. As in the case of the forward-looking model, we see that optimal policy under discretion is significantly looser than the Taylor rule, which lifts off right away. To illustrate that higher uncertainty leads to
delayed liftoff, we also solve the optimal policy when the standard deviation of shocks is 50%
larger. Liftoff is even more delayed under these circumstances (even though our simulation
assumes that no shocks are actually realized).
Optimal policy thus achieves a faster return to low gaps, and builds up a buffer during
this transition so as to guard against bad shock realizations. The output gap and inflation
optimally overshoot their targets during the transition. Table 3 is analogous to Table 2. In the backward-looking model as well, optimal policy significantly outperforms the Taylor rule according to the expected loss. Median liftoff occurs in 2016q2, but is state-contingent of course, and typically occurs with inflation close to target (1.9%) and the output gap positive (0.2%). By comparison, the Taylor rule lifts off with inflation and the output gap well below target.26
26
Indeed, the Taylor rule lifts off even earlier than these statistics suggest, since we start our computation
at time 1 and it has already lifted off.
Figure 2: Lift-off in the Backward-Looking Model
[Panels plot the nominal interest rate, inflation, and the output gap over 20 quarters, comparing optimal discretion, optimal discretion under high uncertainty, and the Taylor rule.]
Table 3: Outcomes in the Backward-Looking Model

                                            Optimal Policy   Taylor Rule
Expected loss                               0.30             0.75
Median liftoff (quarter)                    6                1
Median output gap at liftoff (%)            0.24             -1.27
Median inflation at liftoff (%)             1.90             1.23
Median maximum inflation over 5 years (%)   3.24             2.93
Median minimum output gap over 5 years (%)  -1.03            -1.27
One difference with the previous model is that the Taylor rule does slightly better at reducing peak inflation, with the typical maximum inflation reduced from 3.2% under optimal policy to 2.9%. As in the previous case, however, it is worse at preventing bad outcomes, with the minimum output gap going from -1.0% to -1.3%. Overall, the backward-looking
model, despite its very different structure, generates many of the same outcomes as the
forward-looking model.
Figure 3: Response to Cost-Push Shocks before Baseline Lift-off
[Panels plot the nominal interest rate, the output gap, and inflation over 20 quarters under optimal discretion and the Taylor rule, for a simulation with two consecutive large positive cost-push shocks.]
We conclude by illustrating one of the risks the optimal policy is able to address, namely
the possibility that a shock will drive up inflation before the baseline lift-off. Figure 3 depicts
a particular simulation where there are two consecutive large positive cost-push shocks that
hit the economy before the baseline date of lift-off. The shocks trigger a large increase in
interest rates, enough so that the inflation response is only mildly larger than when the
economy follows the Taylor rule. Similarly, if a large positive shock to the natural rate were
to hit the economy, higher interest rates would follow, and in this case would stabilize both
inflation and output. The simple logic is that staying at zero longer under the baseline
scenario does not impair the ability of monetary policy to respond to future contingencies.27
27
One potential criticism of the optimal policy is that it may entail large movements in interest rates.
Our policy calculation does not give any weight to interest-rate smoothing. It is difficult to rationalize
interest-rate smoothing however; indeed some authors argue that the smoothness of interest rates in the data
is due to learning rather than a desire to smooth interest rates per se (Sack (2000) and Rudebusch (2001)).
3 Risk Management in the FOMC's Policy Record

The FOMC's historical policy record provides many examples of how risk management considerations likely have influenced monetary policy decisions. FOMC minutes and other
Federal Reserve communications reveal a number of episodes when the FOMC indicated
that it took a wait-and-see approach to taking further actions or muted a federal funds rate (FFR) move due to
its uncertainty over the course of the economy or the extent to which the full force of early
policy moves had yet shown through to economic activity and inflation. The record also
indicates several instances when the Committee said its policy stance was taken in part as
insurance against undesirable outcomes; during these times, the FOMC also usually noted
reasons why the potential costs of a policy overreaction likely were modest as compared with
the scenario it was insuring against. And there are a few instances when the Committee
appears to be reacting to head off a potential change in dynamics that might accelerate the
economy into a serious recession.
Two episodes are particularly revealing. The first is the hesitancy of the Committee
to raise rates in 1997 and 1998 to counter inflationary threats because of the uncertainty
generated by the Asian financial crisis and subsequent rate cuts following the Russian default.
The second is the loosening of policy over 2000 and 2001, when uncertainty over the degree
to which growth was slowing and the desire to insure against downside risks appeared to
influence policy. Furthermore, later in the period, the Committees aggressive actions also
seemed to be influenced by attention to the risks associated with the ZLB on interest rates.
Of course, not all references to risk management involved reactions to uncertainty or
insurance-based rationales for policy. For example, at times the FOMC faced conflicting
policy prescriptions for achieving its dual mandate goals for output and inflation. Here, the
Committee generally hoped to set policy to better align the risks to the projected deviations from the two targets, an interesting balancing act, though not necessarily one in which higher moments of the distribution of shocks or potential nonlinearities in economic dynamics had a meaningful influence on policy decisions.
The ZLB was not a relevant consideration for most of the historical record we consider
as well as for the empirical work we do later in section 4. Accordingly, this section first briefly reviews some theoretical rationales for taking a risk management policy approach outside of the ZLB that may help shed light on some Federal Reserve actions.
We will then describe the two episodes we find particularly revealing about the use of risk
management in setting rates. The section concludes with two approaches to quantifying the
role of risk management in policy decision-making as it is described in the FOMC minutes.28
3.1 Rationales for Risk Management away from the ZLB
Policymakers have long emphasized the importance of recognizing uncertainty in their decision-making. As Greenspan (2004) put it: "(t)he Federal Reserve's experiences over the past two decades make it clear that uncertainty is not just a pervasive feature of the monetary policy landscape; it is the defining characteristic of that landscape." This sentiment seems at odds with a wide class of models (namely linear-quadratic models) in which optimal policy involves adjusting the interest rate in response to the mean of the distribution of shocks and information on higher moments is irrelevant (away from the ZLB). What kinds of factors cause departures from such conditions and justify the risk-management approach?
Nonlinearities in economic dynamics are one natural source of such departures. These can occur in both the IS and Phillips curves. For example, suppose recessions are episodes when self-reinforcing dynamics amplify the effects of downside shocks.29 This could be modeled as a dependence of output on lagged output, as in the backward-looking model studied above, but this dependence may be concave rather than linear. Intuitively, negative shocks have a more dramatic effect on reducing future output than positive shocks have on increasing it, and so the greater the uncertainty, the looser optimal policy will be ex ante to guard against the more detrimental outcomes. Alternatively, suppose the Phillips curve is convex,
28
We consider the minutes for each meeting from 1993 to 2008. The start date is predicated on the fact that FOMC minutes prior to 1993 provide little information about the rationale for policy decisions. See Hansen and McMahon (2014) for a detailed discussion of this change in Federal Reserve communications.
29
Classic discussions of special dynamics during recessions include Friedman's (1969) plucking model and Hamilton's (1989) regime switching model.
perhaps owing to wage rigidities at low inflation. Intuitively, a positive shock to the output
gap leads to a significant increase of inflation above target while a negative shock does not
lead to much of an inflation decline; the larger the possible spread of these shocks, the greater
the relative odds of experiencing a bad inflation outcome. Optimal policy will want to guard
against this, leading to a tightening bias.30
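The underlying logic is Jensen's inequality; a sketch in stylized notation (the mapping $g$ is our shorthand, not a formula from the text):
$$\pi_t = g(x_t),\quad g\ \text{convex} \quad\Longrightarrow\quad E[\pi_t] = E[g(x_t)] \ \ge\ g\!\left(E[x_t]\right),$$
with the wedge between the two sides growing under a mean-preserving spread of $x_t$, so that greater uncertainty raises expected inflation for a given expected output gap.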
Relaxing the assumption of a quadratic loss function is perhaps the simplest way to generate a rationale for risk-management. The quadratic loss function is justified as being a
local approximation to consumer welfare (Woodford (2003)). However, it might not be a
very good approximation during times when large shocks drive the economy far away from
underlying trend, or it might simply be an inadequate approximation of the way the FOMC actually behaves. Several authors have considered loss functions with asymmetries, e.g. Surico (2007), Kilian and Manganelli (2008), and Dolado, María-Dolores, and Ruge-Murcia (2004). For instance, the latter paper shows that if the policymaker is less averse to output running above potential than below it, then the optimal policy rule can involve nonlinear terms in the output and inflation gaps. The relevance of higher moments of the distribution of shocks for the policy decision is an obvious by-product of these nonlinearities. To the degree that optimal policy depends on the second and third moments of the shock distributions, both uncertainty and asymmetry may enter the policymaking calculus.
The risk management approach also can be found in the large literature on how optimal
monetary policy should adjust for uncertainty about the true model of the economy. Brainard
(1967) derived the important result that parameter uncertainty over the effects of policy
should lead to additional caution and smaller policy responses to deviations from target, a
principle that is often called gradualism. This principle has had considerable influence on
policymakers, for instance Blinder (1998) or Williams (2013). There also is a more recent
literature that incorporates concern about model misspecification into optimal monetary policy analysis. Sometimes this is along the lines of the robust control analysis of Hansen and Sargent (2008), which often has been interpreted to mean that model uncertainty should
30
The fact that a convex Phillips curve can lead to a role for risk-management has been discussed by Laxton, Rose, and Tambakis (1999) and Dolado, María-Dolores, and Naveira (2005).
28
generate more aggressive policy. However, as explained by Barlevy (2011), both Brainardstyle gradualism and robust-control aggressive policy results depend on the specifics of the
underlying models of the economy, and reasonable alternatives can overturn the baseline
results. Hence, the effect of parameter and model uncertainties are themselves uncertain.
Nonetheless, these analyses often indicate that higher moments of the distribution of shocks
can influence the setting of optimal policy.
With this as background, we now turn to the two historical examples we think are particularly insightful for how the Federal Reserve may have implemented the risk-management
approach to policy.
3.2 1997-1998
1997 was a good year for the U.S. economy: real GDP increased 3-3/4 percent, the unemployment rate fell to 4.7 percent (about 3/4 percentage point below the Board of Governors staff's estimate of the natural rate) and core CPI inflation was 2-1/4 percent.31 But with growth solid and labor markets tight, the FOMC clearly was concerned about a buildup in
inflationary pressures. As noted in the Federal Reserves February 1998 Monetary Policy
Report:
The circumstances that prevailed through most of 1997 required that the Federal
Reserve remain especially attentive to the risk of a pickup in inflation. Labor
markets were already tight when the year began, and nominal wages had started
to rise faster than previously. Persistent strength in demand over the year led to
economic growth in excess of the expansion of the economy's potential, intensifying the pressures on labor supplies.
Indeed, over much of the period between early 1997 and mid-1998, the FOMC directive
maintained a bias indicating that it was more likely to raise rates to battle inflationary
pressures than it was to lower them. Nonetheless, the FOMC left the FFR unchanged at 5.5
percent from March 1997 until September 1998. Why did it do so?
31
The GDP figure refers to the BEA's third estimate for the year released in March 1998.
Certainly the inaction in large part reflected the forecast for economic growth to moderate
to a more sustainable pace as well as the fact that actual inflation had remained contained
despite tight labor market conditions.32 But, in addition, on numerous occasions heightened
uncertainty over the outlook for growth and inflation apparently reinforced the decision to
refrain from raising rates. The following quote from the July 1997 FOMC minutes is a
revealing example:
An unchanged policy seemed appropriate with inflation still quiescent and business activity projected to settle into a pattern of moderate growth broadly consistent with the economy's long-run output potential. While the members assessed
risks surrounding such a forecast as decidedly tilted to the upside, the slowing of
the expansion should keep resource utilization from rising substantially further,
and this outlook together with the absence of significant early signs of rising inflationary pressures suggested the desirability of a cautious wait and see policy
stance at this point. In the current uncertain environment, this would afford the
Committee an opportunity to gauge the momentum of the expansion and the
related degree of pressure on resources and prices.
Furthermore, the Committee did not see high costs to waiting and seeing. They thought any increase in inflation would be slow, and that if needed a limited tightening on top of the current 5.5 percent funds rate would be sufficient to rein in any emerging price pressures. This is seen in the following quote from the same meeting:
The risks of waiting appeared to be limited, given that the evidence at hand
did not point to a step-up in inflation despite low unemployment and that the
current stance of monetary policy did not seem to be overly accommodative
. . . In these circumstances, any tendency for price pressures to mount was likely
to emerge only gradually and to be reversible through a relatively limited policy
adjustment.
32. Based on the FFR remaining at 5.5 percent, the August 1998 Greenbook projected GDP growth to slow from 2.9 percent in 1998 to 1.7 percent in 1999. The unemployment rate was projected to rise to 5.1 percent by the end of 1999 and core CPI inflation was projected to edge down to 2.1 percent. Note that core PCE inflation was much lower than core CPI inflation at this time: it was projected at 1.3 percent in 1998 and 1.5 percent in 1999. However, the FOMC had not yet officially adopted the PCE price index as its preferred inflation measure, nor had it set an official inflation target.
Thus, it appears that uncertainty and associated risk management considerations supported
the Committee's decision to leave policy on hold.
Of course, the potential fallout of the Asian financial crisis on the U.S. economy was a
major factor underlying the uncertainty about the outlook. The baseline scenario was that
the associated weakening in demand from abroad and a stronger dollar would be enough
to keep U.S. inflationary pressures in check but not be strong enough to cause inflation or
employment to fall too low. As Chairman Greenspan noted in his February 1998 Humphrey-Hawkins testimony to Congress, there were substantial risks to this outlook, with the delicate balance dictating unchanged policy:
However, we cannot rule out two other, more worrisome possibilities. On the one
hand, should the momentum to domestic spending not be offset significantly by
Asian or other developments, the U.S. economy would be on a track along which
spending could press too strongly against available resources to be consistent
with contained inflation. On the other, we also need to be alert to the possibility that the forces from Asia might damp activity and prices by more than is
desirable by exerting a particularly forceful drag on the volume of net exports
and the prices of imports. When confronted at the beginning of this month with
these, for the moment, finely balanced, though powerful forces, the members of
the Federal Open Market Committee decided that monetary policy should most
appropriately be kept on hold.
Indeed, by late in the summer of 1998, this balance had changed, as the strains following
the Russian default weakened the outlook for foreign growth and tightened financial conditions in the U.S. The Committee was concerned about the direct implications of these
developments on U.S. financial markets which were already evident in the data as well as
for the real economy, which were still just a prediction. The staff forecast prepared for the
September FOMC meeting reduced the projection for growth in 1999 by about 1/2 percentage point (to 1-1/4 percent), a forecast predicated on a 75 basis point reduction in the FFR
spread out over three quarters. Such a forecast was not a disaster; indeed, at 5.1 percent, the unemployment rate projected for the end of 1999 was still below the Board staff's estimate of its natural rate. Nonetheless, the FOMC moved much faster than assumed
in the staff's forecast, lowering rates 25 basis points at its September and November meetings as well as at an intermeeting cut in October. According to the FOMC minutes, the rate cuts
were made in part as insurance against a worsening of financial conditions and weakening
activity. As they noted in September:
. . . such an action was desirable to cushion the likely adverse consequences on
future domestic economic activity of the global financial turmoil that had weakened foreign economies and of the tighter conditions in financial markets in the
United States that had resulted in part from that turmoil. At a time of abnormally high volatility and very substantial uncertainty, it was impossible to
predict how financial conditions in the United States would evolve. . . . In any
event, an easing policy action at this point could provide added insurance against
the risk of a further worsening in financial conditions and a related curtailment
in the availability of credit to many borrowers.
While the references to insurance are clear, the case also can be made that these policy
moves were in large part made to realign the misses in the expected paths for growth and
inflation from their policy goals. Over this time the prescriptions to address the risks to the
FOMC's dual mandate policy goals were in conflict: risks to achieving the inflation mandate
called for higher interest rates while risks to achieving the maximum employment mandate
called for lower rates.33 As the above quote from Chairman Greenspan indicated, in 1997
the Committee thought that a 5-1/2 percent FFR kept these risks in balance. But as the
odds of economic weakness increased, the Committee cut rates to bring the risks to the two
goals back into balance. As Chairman Greenspan indicated in his February 1999 Monetary
Policy Testimony:
To cushion the domestic economy from the impact of the increasing weakness in foreign economies and the less accommodative conditions in U.S. financial markets, the FOMC, beginning in late September, undertook three policy easings. By mid-November, the FOMC had reduced the federal funds rate from 5-1/2 percent to 4-3/4 percent. These actions were taken to rebalance the risks to the outlook, and, in the event, the markets have recovered appreciably.

33. To quote the February 1999 Monetary Policy Report: "Monetary policy in 1998 needed to balance two major risks to the economic expansion. On the one hand, with the domestic economy displaying considerable momentum and labor markets tight, the Federal Open Market Committee (FOMC) was concerned about the possible emergence of imbalances that would lead to higher inflation and thereby, eventually, put the sustainability of the expansion at risk. On the other hand, troubles in many foreign economies and resulting financial turmoil both abroad and at home seemed, at times, to raise the risk of an excessive weakening of aggregate demand."
So were the late 1998 rate moves a balancing of forecast probabilities, insurance against
a downside skew in possible outcomes, or some combination of both? There is no easy
answer. This motivates our econometric work in Section 4 that seeks to disentangle the
normal response of policy to expected outcomes from uncertainty and other related factors
that may have influenced the policy decision.
In the end, the economy weathered the fallout from the Russian default well. In June
1999, the staff forecast projected the unemployment rate to end the year at 4.1 percent and
that core CPI inflation would rise to 2.5 percent by 2000.34 Against this backdrop, the FOMC
decided to increase the FFR to 5 percent. In the event, the staff forecast underestimated
the strength of the economy and underlying inflationary pressures, and the FOMC ended up
executing a series of rate hikes that eventually brought the FFR up to 6.5 percent by May
of 2000.
3.3 2000-2001
At the time of the June 2000 FOMC meeting, the unemployment rate stood at 4 percent and
core PCE inflation, which the Committee was now using as its main measure of consumer
price inflation, was running at about 1-3/4 percent, up from 1-1/2 percent in 1999. The
staff forecast that growth would moderate to a rate near or a little below potential but that
unemployment would remain near its current level and that inflation would rise to 2.3 percent
in 2001; and this forecast was predicated on another 75 basis points of tightening that would bring the FFR to 7-1/4 percent by the end of 2000. Despite this outlook, the FOMC decided
to leave rates unchanged. What drove this pause? It seems likely to us that risk management
was an important consideration.
34. This forecast was based on an assumption of the FFR gradually moving up to 5-1/4 percent by the first quarter of 2000.
In particular, the FOMC appeared to want to see how uncertainty over the outlook
would play out. First, the incoming data and anecdotal reports from Committee members' business contacts pointed to a slowdown in growth, but the degree of the slowing was not
clear. Second, rates had risen substantially over the past year, and given the lags from
policy changes to economic activity, it was unlikely that the full effects of the hikes had
yet been felt. Given the relatively high level of the FFR and the slowdown in growth that appeared to be in train, the Committee seemed wary of overtightening. Third, despite the staff
forecast, the FOMC apparently considered the costs of waiting in terms of inflation risks to
be small. Accordingly, they thought it better to put a rate increase on hold and see how the
economy developed. The June 2000 minutes contain a good deal of commentary supporting
this interpretation:35
The increasing though still tentative indications of some slowing in aggregate
demand, together with the likelihood that the earlier policy tightening actions
had not yet exerted their full retarding effects on spending, were key factors
in this decision. The uncertainties surrounding the outlook for the economy,
notably the extent and duration of the recent moderation in spending and the
effects of the appreciable tightening over the past year . . . reinforced the argument
for leaving the stance of policy unchanged at this meeting and weighting incoming
data carefully. . . .Members generally saw little risk in deferring any further policy
tightening move, particularly since the possibility that underlying inflation would
worsen appreciably seemed remote under prevailing circumstances. Among other
factors, inflation expectations had been remarkably stable despite rising energy
prices, and real interest rates were already relatively elevated.
Moving through the second half of 2000, it became increasingly evident that growth had
slowed to a pace somewhat below trend and may in fact have been poised for even more
35. This was not the first time the Committee had invoked such arguments during the tightening cycle. In October 1999 the FOMC left rates unchanged in part over uncertainty about the economic outlook. And at the February and March 2000 meetings they opted for small 25 basis point increases because of uncertainty. As stated in the July 2000 Monetary Policy Report to Congress regarding the smaller moves in February and March: "The FOMC considered larger policy moves at its first two meetings of 2000 but concluded that significant uncertainty about the outlook for the expansion of aggregate demand in relation to that of aggregate supply, including the timing and strength of the economy's response to earlier monetary policy tightenings, warranted a more limited policy action."
pronounced weakness. Furthermore, inflation was moving up at a slower pace than the staff
had projected in June. In response, the Committee held the FFR at 6.5 percent through
the end of 2000. But the data around the turn of the year proved to be weaker than the
Committee had anticipated. In a conference call on January 3, 2001, the FOMC cut the FFR
to 6 percent and lowered it again to 5-1/2 percent at the end-of-month FOMC meeting.36
In justifying the aggressive ease, the Committee stated:
Such a policy move in conjunction with the 50 basis point reduction in early January would represent a relatively aggressive policy adjustment in a short period
of time, but the members agreed on its desirability in light of the rapid weakening in the economic expansion in recent months and associated deterioration
in business and consumer confidence. The extent and duration of the current
economic correction remained uncertain, but the stimulus . . . would help guard
against cumulative weakness in economic activity and would support the positive factors that seemed likely to promote recovery later in the year . . . In current
circumstances, members saw little inflation risk in such a front-loaded easing policy, given the reduced pressures on resources stemming from the sluggish
performance of the economy and relatively subdued expectations of inflation.
According to this quote, not only was the actual weakening in activity an important
consideration in the policy decision, but uncertainty over the extent of the downturn and
the possibility that it might turn into an outright recession seemed to spur the Committee
to make a large move. The "help guard against cumulative weakness" and "front-loaded" language could be read as the Committee taking out some additional insurance against the possibility that the weakening activity would snowball into a recession. Indeed this could have reflected a concern about the kinds of non-linear output dynamics or perhaps non-quadratic losses associated with a larger recession that we discuss in Section ??.
The FOMC steadily brought the FFR down further over the course of 2001 against the
backdrop of weakening activity. Still, though, the economy seemed to be skirting a recession.
36. At that time the Board staff was forecasting that growth would stagnate in the first half of the year, but that the economy would avoid an outright recession even with the FFR at 5.75 percent. Core PCE inflation was projected to rise modestly to a little under 2.0 percent.
Then the tragic events of September 11 occurred. There was, of course, huge uncertainty
over how international developments, logistics disruptions, and the sentiment of households,
businesses, and financial markets would affect spending and production. By November the
Board staff was forecasting a modest recession: output in the second half of 2001 was projected to decline at a 1-1/2 percent annual rate and then rise at just a 1-1/4 percent rate in
the first half of 2002. By the end of 2002 the unemployment rate was projected to rise to
6.1 percent and core PCE inflation was projected to be 1-1/2 percent. These forecasts were
predicated on the FFR remaining flat at 2-1/4 percent.
The FOMC, however, was worried about something more serious than the shallow recession forecast by the staff. Furthermore, a new risk came to light, namely the chance that disinflationary pressures might emerge which, once established, would be more difficult to
fight with the FFR already low. In response, the Committee cut the FFR 50 basis points
in a conference call on September 17 and again at their regular meetings in October and
November. As earlier in the year, they preferred to act aggressively. As noted in the minutes
from the November 2001 FOMC meeting:
. . . members stressed the absence of evidence that the economy was beginning
to stabilize and some commented that indications of economic weakness had
in fact intensified. Moreover, it was likely in the view of these members that
core inflation, which was already modest, would decelerate further. In these
circumstances insufficient monetary policy stimulus would risk a more extended
contraction of the economy and possibly even downward pressures on prices that
could be difficult to counter with the current federal funds rate already quite
low. Should the economy display unanticipated strength in the near term, the
emerging need for a tightening action would be a highly welcome development
that could be readily accommodated in a timely manner to forestall any potential
pickup in inflation.
This passage suggests that the large cuts were not only aimed at preventing the economy
from falling into a serious recession with deflationary pressures, but that the Committee was
also concerned that such an outcome could be difficult to counter with the current funds rate already quite low. Accordingly, the aggressive policy moves could in part also have
reflected insurance against the future possibility of running into the ZLB, precisely the policy
scenario and optimal policy prescription described in Section 2.
3.4 Coding the Minutes
Clearly, the minutes contain many references to the Committee noting that uncertain economic conditions influenced their policy decision, and times when insurance was cited as a reason to alter the stance of policy one way or the other. But did these references actually lead to deviations from the canonical policy response based on simple point forecasts for the outlook? In this section we take a systematic approach, translating these considerations into variables that can be used in empirical work to address this question.
In the spirit of the narrative approach pioneered by Romer and Romer (1989), we built
judgmental indicators based on our reading of the minutes. We concentrated on the paragraphs that describe the Committee's rationale for its policy decision, reading these passages for references to when insurance considerations, or uncertainty over the economic environment or the efficacy of current or past policy moves, appeared closely linked to the FOMC's decision. Other portions of the minutes, for example the parts that cover staff and participants' views of current and prospective economic and financial developments, were excluded from our analysis in order to better isolate arguments that directly influenced the meeting's policy decision from more general discussions of unusual data or normal forecast uncertainty.
We constructed two separate indicator variables, one for uncertainty (hUnc) and one for insurance (hIns).37 The uncertainty variable was coded to plus one if we judged that
the Committee positioned the FFR higher than it otherwise would due to uncertainty. We
coded a minus one if it appeared that uncertainty led the FOMC to put rates lower than they
otherwise would be. If uncertainty did not appear to be an important factor influencing the
policy decision, we coded the indicator as zero. We similarly coded the insurance variable by
identifying when the minutes cited insurance against some adverse outcome as an important
consideration in the Committees decision, again with a value of one meaning rates were
higher and a value of minus one meaning they were lower than they otherwise would have
been.38 Since these two variables were never coded differently from zero for the same meeting, we also consider their sum (hSum).
As an example of our coding, consider the June 2000 pause in rate hikes discussed above.
Though they generally thought policy had to tighten, the Committee was uncertain about
how much growth was slowing and the degree to which their past tightening actions had
yet shown through to economic activity. Accordingly, the FOMC decided to wait and assess
further developments before taking additional policy action. This is clear from the sections
of the minutes highlighted in italics:
The increasing though still tentative indications of some slowing in aggregate
demand, together with the likelihood that the earlier policy tightening actions
had not yet exerted their full retarding effects on spending, were key factors in
this decision. The uncertainties surrounding the outlook for the economy, notably
the extent and duration of the recent moderation in spending and the effects of
the appreciable tightening over the past year, including the 1/2 percentage point
increase in the intended federal funds rate at the May meeting, reinforced the
argument for leaving the stance of policy unchanged at this meeting and weighting
incoming data carefully.
We coded this meeting as a minus one for our uncertainty measure: rates were lower because uncertainty over the economic outlook and the effects of past policy moves appear to have been an important factor in the Committee deciding not to raise rates when they otherwise would have.
However, we did not code all mentions of uncertainty as a one or minus one. For example,
in March 1998 (a meeting when the FOMC did not change rates despite some concern over higher inflation) the Committee did refer to uncertainties over the economic outlook and say
38. A value of one for either variable could reflect the Committee raising rates by more or lowering rates by less than they would have if they ignored uncertainty or insurance, or a decision to keep the FFR at its current level when a forecast-only call would have been to lower rates. Similarly, a value of minus one could occur if the FOMC either lowered rates by more or increased them by less than they otherwise would, or if the Committee left rates unchanged when they otherwise would have raised them.
that it could wait for further developments before tightening. The FOMC had held the FFR
flat at 5.5 percent for about a year, and so was not obviously in the midst of a tightening
cycle; the baseline forecast articulated in the policy paragraphs seemed consistent with the
current FFR setting; and the commentary over the need to tighten was in reference to an
indefinite point in the future as opposed to the current or subsequent FOMC meeting. So,
in our judgment, uncertainty did not appear to be a very important factor holding back a
rate increase at this meeting and we coded this date as a zero. Quoting the minutes (again,
with our emphasis added):
The members agreed that should the strength of the economic expansion and
the firming of labor markets persist, policy tightening likely would be needed at
some point to head off imbalances that over time would undermine the expansion
in economic activity. Most saw little urgency to tighten policy at this meeting,
however. The economy might well continue to accommodate relatively robust
economic growth and a high level of resource use for an extended period without
a rise in inflation . . . On balance, in light of the uncertainties in the outlook and
given that a variety of special factors would continue to contain inflation for a
time, the Committee could await further developments bearing on the strength of
inflationary pressures without incurring a significant risk that disruptive policy
actions would be needed later in response to an upturn in inflation and inflation
expectations.
Of course, coding the minutes in this way is inherently subjective and there is no definitive way to judge the accuracy of the decisions we made. So we also constructed objective measures of how often references to uncertainty or insurance appeared in the policy paragraphs of the minutes. In particular we constructed conditional measures which count the percentage of sentences containing words related to uncertainty or insurance in conjunction with references to economic activity or inflation. The words we used to capture uncertainty are "uncertainty," "uncertain," "uncertainties," "question" and "questions." To capture insurance we used "insurance," "ensure," "assurance" and "risk management." The conditioning words for inflation were "inflation," "prices," "deflation," "disinflation," "cost" and "costs." To condition on activity we used "activity," "growth," "slack," "resource," "resources," "labor" and "employment." We combined the counts for uncertainty and insurance into the two variables mUnc and mIns.39
[Figure 4: Uncertainty measures: judgmental indicator and percent of sentences]
Figures 4 and 5 show plots of these uncertainty and insurance measures. Non-zero values of the indicator variables are indicated by orange circles and the blue bars indicate the word counts. For the word counts we have added together the conditional measures, so for example the insurance word counts reflect mentions of insurance words in conjunction with at least one of the conditioning words for inflation and activity. Not surprisingly, dealing with uncertainty is a regular feature of monetary policy decision making. The uncertainty indicator turns on in 31 out of the 128 meetings between 1993 and 2008. Indications that insurance was a factor in shading policy are not as common, but still show up 14 times in the indicator. Most of the time (24 for uncertainty and 11 for insurance) it appears that rates were set lower than they otherwise would have been to account for these factors.

[Figure 5: Insurance measures: judgmental indicator and percent of sentences]
The sentence counts and indicator variables do not line up perfectly. Sometimes the
indicator variables are reflected in the sentence counts but sometimes they are not. There
are also meetings where the sentence counts are positive but we did not judge them to indicate
that rates were set differently than they normally would be. For example, in March of 2007, our judgmental measure does not code uncertainty as an important factor putting rates higher or lower than they otherwise would be, whereas the sentence count finds uncertainty referenced in nearly one-third of the sentences in the policy section of the minutes. Incoming data on economic activity were soft, and the Committee was uncertain over the degree to which the economy was weakening. At the same time, they had a good deal of uncertainty over whether their expected decline in inflation (which was running uncomfortably high at the time) actually would materialize. In the end, they only removed the bias in the statement towards further tightening, and did not adjust policy one way or the other in response to the conflicting uncertainties. Hence the judgmental indicator did not code policy as being higher or lower than it otherwise would be due to uncertainty.
At other times, the word count more simply misread the Committee's intentions. For example, in March 2000 the word count identified an insurance coding since it found the word "ensure" in the policy portion of the minutes. However, this turned out not to be associated with the current policy decision, but rather with a comment regarding the possible need to increase rates in the future to ensure inflation remained contained, and hence was not coded in our judgmental insurance indicator.
We conclude this section by discussing why we did not attempt to use the minutes to
measure any variables for risk management per se. The minutes often contain discussions of
risks to the Committee's dual mandate goals. But when not accompanied by references to
uncertainty or insurance, the risk management language may simply describe policy settings
that balance conflicting risks to the outlooks of output and inflation relative to their implicit
targets. Such policy moves may just be adjusting the expected losses along output and
inflation paths in a balanced fashion as is predicted by the canonical framework for studying
optimal policy under discretion. This was the issue we discussed in our earlier narrative of
the 1997-1998 period.40
Some references to risk do appear to refer to circumstances that moved policy actions
away from the norm. Nevertheless we coded our indicator variables to zero for some of
these meetings as well. March 2008 provides an example. At that meeting the staff was
projecting a mild recession followed by a fair-sized recovery. This forecast was conditioned
on a relatively aggressive cut in the FFR; and in the event, the Committee lowered rates by
40. Indeed, for much of our sample period, the Committee discussed risks about the future evolution of output or inflation relative to target in order to signal a possible bias in the direction of upcoming rate actions. For example, in the July 1997 meeting described earlier, the minutes indicate members ". . . wanted to retain the existing asymmetry toward restraint . . . An asymmetric directive was consistent with their view that the risks clearly were in the direction of excessive demand pressures . . ." Since the Committee delayed tightening at this meeting, this risk reference communicated that the risks to price stability presented by the baseline outlook would likely eventually call for rate increases. It is not a reference that uncertainty or a skew in the distribution of possible inflation outcomes should dictate some non-standard policy response.
25 basis points more than the staff had assumed. The minutes state:
. . . most members judged that a substantial easing in the stance of monetary policy was warranted at this meeting. The outlook for economic activity had weakened considerably since the January meeting, and members viewed the downside
risks to economic growth as having increased. Indeed, some believed that a
prolonged and severe economic downturn could not be ruled out . . .
The Committee's actions and this reference to risk appear, like the January 2001 example cited above, to reflect a concern about nonlinear recessionary dynamics or non-quadratic losses. According to our theory, such factors should elicit a non-standard policy response. However, while there were references to elevated uncertainty about growth and inflation in the March 2008 minutes, it was not clear to us that policy was tilted one way or another in response to these uncertainties. Therefore we coded our indicator variables to zero for this meeting.
Given these disparate uses of the term "risk" in the minutes, we do not confine our empirical analysis to judgmental coding or sentence counts from the FOMC minutes. We consider other variables not specific to the minutes to try to uncover when asymmetries about point forecasts or unusual surprises in the outlook generated an atypical policy response. We discuss how we do this below in Section 4.2.
4 Econometric Evidence

We have uncovered clear evidence that risk management considerations have been influential in determining the FOMC's policy stance. This suggests that a proposal to guide policy today by managing the risk of returning to the ZLB is not inconsistent with the words of the Committee. But it is not clear at this stage whether risk management has had a material impact on the FOMC's FFR choices. Have the words of the FOMC been reflected in its deeds? If the answer to this question is no, then any proposal to incorporate risk management into policy-making in the current environment would be harder to justify. Therefore in this section we quantify the effect risk management has had on monetary policy in the pre-ZLB era.
We estimate monetary policy reaction functions of the kind studied in Clarida, Gali, and Gertler (2000) and many other papers. These have the FFR set as a linear function of output gap and inflation forecasts; there is no role for risk management unless it feeds directly into these forecasts. To quantify the role of risk management beyond any direct influence it has on forecasts, we add to the reaction function variables that proxy for it. Since we focus on a pre-ZLB sample, a test for the statistical significance of a given proxy's coefficient amounts to a test of whether the FOMC has followed the normative prescriptions of the theories mentioned in the introduction to Section 3.41 However, an insignificant coefficient cannot be interpreted as evidence against a role for risk management precisely because it can influence forecasts.42 Considering a broad array of proxies, we find plenty of evidence that risk management has had a statistically and economically significant impact on monetary policy beyond any direct effects on the forecast.43
4.1 Empirical Strategy
Let R^*_t denote the notional target for the FFR in period t. We assume the FOMC uses the following rule for setting this target:

R^*_t = R^* + \beta (E_t[\pi_{t,k}] - \pi^*) + \gamma E_t[x_{t,q}] + \delta s_t,        (10)

where \pi_{t,k} denotes the average annualized inflation rate from t to t+k, \pi^* is the FOMC's target for inflation, x_{t,q} is the average output gap from t to t+q, s_t is a risk management proxy and E_t is the expectations operator conditional on information available to the FOMC at date t. The coefficients \beta, \gamma and \delta are assumed to be fixed over time. R^* is the central bank's desired nominal rate when inflation is at target, the output gap is closed and uncertainty does not influence policy, \delta = 0. Assume the average output and inflation gaps both equal zero. Furthermore suppose the FOMC acts as if the natural rate is constant and out of its control. Then R^* = r^* + \pi^*, where r^* is the natural rate.44

41. A given theory could be correct and our tests indicate no effects of uncertainty on policy because the Fed has not conducted optimal policy. Testing the positive implications of the theories discussed in Section ?? is beyond the scope of this paper.

42. For example, in our discussion of the expectations channel in Section 2.1 uncertainty over the future natural rate has a direct impact on optimal policy, but the expected value of the output gap is a sufficient statistic for this uncertainty.

43. There is a large literature that examines non-linearities in policy reaction functions (see Gnabo and Moccero (2014), Mumtaz and Surico (2015), and Tenreyro and Thwaites (2015) for reviews of this literature and recent estimates), but surprisingly little work that speaks directly to risk management. We discuss the related literature below.
Our estimation equation embodies two additional assumptions. First, the FOMC has a preference for interest rate smoothing and so does not choose the FFR to hit its notional target instantaneously. Second, the FOMC does not have perfect control over interest rates. This motivates the following specification for the effective target FFR, R_t:

R_t = (1 - a) R^*_t + A(L) R_{t-1} + \epsilon_t,        (11)

A(L) = \sum_{j=0}^{N-1} a_{j+1} L^j   and   0 \le a \equiv \sum_{j=0}^{N-1} a_{j+1} < 1,

with N denoting the number of FFR lags. The variable \epsilon_t is a mean zero and serially independent interest rate shock. Combining (10) and (11) yields our estimation equation:

R_t = b_0 + b_1 E_t[\pi_{t,k}] + b_2 E_t[x_{t,q}] + A(L) R_{t-1} + b_3 s_t + \epsilon_t,        (12)

where b_0 = (1-a)(R^* - \beta \pi^*), b_1 = (1-a)\beta, b_2 = (1-a)\gamma and b_3 = (1-a)\delta.

44. There is no presumption that (10) reflects optimal policy and so assuming a constant natural rate is not inconsistent with our theoretical analysis. We explored using forecasted growth in potential output derived from Board staff forecasts to proxy for the natural rate and found this did not affect our results.

45. Gnabo and Moccero (2014) estimate policy reaction functions using staff forecasts as well. The forecasts are obtained from the Federal Reserve Bank of Philadelphia public web site.
use all these observations.46 We study our other proxies at the quarterly frequency. In these cases our results are based on staff forecasts corresponding to FOMC meetings closest to the middle of each quarter. We measure R_t in meeting-based cases with the FFR announced at the end of the meeting and in the quarterly cases with the average FFR over the 30 days following a meeting. Because our measures of E_t[\pi_{t,k}] and E_t[x_{t,q}] are based solely on information available before an FOMC meeting, we can obtain consistent estimates of \beta, \gamma and \delta by estimating (12) by ordinary least squares, as long as there are sufficient lags in R_t to ensure that the errors \epsilon_t are serially uncorrelated.47
As discussed above, we test the null hypothesis that risk management does not impact the setting of the FFR by estimating (12) and determining whether \delta is significantly different from zero. We do not allow the coefficients on the macroeconomic forecasts in the reaction function to depend on uncertainty, as suggested by the work of Brainard (1967) and others. However, as we (will) show in the appendix, if these coefficients are linear functions of uncertainty then the null hypothesis \delta = 0 encompasses the hypothesis that uncertainty does not affect the reaction function coefficients.
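To fix ideas, a specification like (12) can be estimated with standard regression tools. The sketch below assumes a pandas DataFrame with hypothetical column names ("ffr", "inf_fcst", "gap_fcst", "proxy"); the two-lag choice and the heteroskedasticity-robust standard errors (cf. footnote 46) are illustrative, not a transcription of our actual code.

    import pandas as pd
    import statsmodels.api as sm

    def estimate_policy_rule(df: pd.DataFrame, n_lags: int = 2):
        """OLS estimate of R_t = b0 + b1*E[pi] + b2*E[x] + A(L)R_{t-1} + b3*s_t + e_t.

        df is assumed to hold one row per observation with columns 'ffr',
        'inf_fcst', 'gap_fcst' and 'proxy' (hypothetical names).
        """
        X = df[["inf_fcst", "gap_fcst", "proxy"]].copy()
        for j in range(1, n_lags + 1):
            X[f"ffr_lag{j}"] = df["ffr"].shift(j)  # the A(L)R_{t-1} terms
        X = sm.add_constant(X)
        sample = pd.concat([df["ffr"], X], axis=1).dropna()
        # Heteroskedasticity-robust standard errors.
        fit = sm.OLS(sample["ffr"], sample.drop(columns="ffr")).fit(cov_type="HC1")
        # The notional-target coefficient divides out the smoothing:
        # delta = b3 / (1 - a), with a the sum of the estimated lag coefficients.
        a_hat = sum(fit.params[f"ffr_lag{j}"] for j in range(1, n_lags + 1))
        return fit, fit.params["proxy"] / (1.0 - a_hat)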
4.2 Risk Management Proxies
In addition to our FOMC-minutes-based variables, we consider several proxies for risk management that do not rely on our interpretation of what the words in the FOMC minutes mean. Two of these variables are derived from the Board staff's forecast seen by the FOMC at their regular meetings and so we study them along with our other meeting-based proxies; the remaining ones are studied at the quarterly frequency. These latter variables can be divided into two groups based on our assessment of whether they primarily reflect variance or skewness in the forecast.
46. In this estimation the time between meetings is held constant even though this is not true in practice. We account for this discrepancy when we calculate standard errors by allowing for heteroskedasticity.

47. We make no attempt to address the impact of the possibility of hitting the ZLB in our estimation. See Chevapatrakul, Kim, and Mizen (2009) and Kiesel and Wolters (2014) for papers that do this.

The two meeting-based proxies involve revisions to the Board staff's forecasts for the
output gap (frGap) and core CPI inflation (frInf). The revisions correspond to changes in
the forecast for the same one year period made between meeting t and t 1. These variables
allow us to assess whether shocks that move forecasts a large amount elicit an extra policy
response. If the Committee was only worried about the effects of unusual events on its point
forecast, then the post-shock projection of the output or inflation gaps would be sufficient
to describe the policy setting. But, if a large forecast revision also signals an asymmetric
weight on outcomes in the direction of the shock and the FOMC wanted to insure against
those outcomes, we could observe effects of revisions on the FFR beyond their influence on
the point forecast.
Two of the quarterly proxies are based on financial market data: VIX and Spread. VIX is the well-known measure of market participants' expectations of volatility of the S&P 500 stock index over the next 30-day period.48 This variable possibly confounds uncertainty due to financial factors that could be unrelated to the outlook for the economy. However, the S&P 500 reflects expectations of earnings, so VIX should measure market participants' uncertainty about the outlook for the economy over horizons relevant to the FOMC.49 Spread is simply the difference between the quarterly average of daily yields on BAA corporate bonds and 10-year Treasury bonds. Gilchrist and Zakrajsek (2012) demonstrate that this variable measures information on private sector default risk plus other factors that may indicate downside risks to economic growth.50
The remaining proxies we look at are based on the Survey of Professional Forecasters (SPF). The SPF surveys professional forecasters about their point forecasts of GDP growth and GDP deflator inflation and their probability distributions for these forecasts.51 We use both kinds of information to construct measures of variance and skewness in the economic outlook one year ahead. We measure variance using the median among forecasters of the standard deviations calculated from each individual probability distribution (vGDP and vInf) and the interquartile range of point forecasts across individuals (PFvGDP and PFvInf).52 PFvGDP and PFvInf are properly thought of as measuring forecaster disagreement, but there is a large literature that uses forecaster disagreement as a proxy for variance.53 To measure skewness we use the median of the individual forecasters' mean less mode (sGDP and sInf) and the difference between the mean and the mode of the cross-forecaster distribution of point forecasts (PFsGDP and PFsInf).

48. This is the Chicago Board Options Exchange Market Volatility Index obtained from the Haver Analytics database.

49. Bekaert, Hoerova, and Lo Duca (2013) study the impact of innovations to VIX on the FFR in a VAR setting. They find a positive innovation to VIX leads to looser policy, although their findings are not very robust and only weakly significant. Gnabo and Moccero (2014) find that policy responds more aggressively, and the degree of inertia in policy is lower, in periods of high economic risk as measured by VIX.

50. Alcidi, Flamini, and Fracasso (2011), Castelnuovo (2003), and Gerlach-Kristen (2004) consider the role of Spread in estimates of monetary policy reaction functions.

51. The forecast distributions are for growth and inflation in the current and following year. We use the procedure of D'Amico and Orphanides (2014) to translate these distributions into distributions for GDP growth and inflation over the next four quarters. Note that the bins forecasters are asked to put probability mass on are 1 percentage point wide, so moments calculated using them may contain substantial measurement error.
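A sketch of how such moments might be computed, assuming SPF histograms are arranged as a matrix of per-forecaster bin probabilities over common bin midpoints (an illustrative layout; our actual construction follows the D'Amico and Orphanides (2014) procedure described in footnote 51):

    import numpy as np

    def density_moments(midpoints: np.ndarray, probs: np.ndarray):
        """vInf/sInf-style measures. probs has shape (n_forecasters, n_bins),
        each row a probability histogram over the common bin midpoints."""
        means = probs @ midpoints
        sds = np.sqrt(probs @ midpoints**2 - means**2)
        modes = midpoints[np.argmax(probs, axis=1)]
        variance_measure = np.median(sds)        # median individual std deviation
        skew_measure = np.median(means - modes)  # median mean-less-mode asymmetry
        return variance_measure, skew_measure

    def point_forecast_measures(points: np.ndarray):
        """PFv/PFs-style measures from the cross-forecaster point forecasts."""
        q75, q25 = np.percentile(points, [75, 25])
        counts, edges = np.histogram(points, bins=10)
        k = np.argmax(counts)
        mode = 0.5 * (edges[k] + edges[k + 1])   # crude histogram mode
        return q75 - q25, points.mean() - mode   # disagreement (IQR), skew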
All samples run to the end of 2008 due to the onset of the ZLB, but begin at different dates according to the number of observations available for each variable. The benchmark start date is determined by the beginning of Alan Greenspan's tenure as Chairman of the FOMC in 1987, but later dates are used for two sets of variables due to limitations of the data. The first observation for the meeting-based indicators is the first FOMC meeting of 1993 for the reason discussed in Section 3. When considering the proxies based on forecast distributions of individual forecasters from the SPF, the first observation is 1992q1 due to a discrete change in SPF methodology that occurs then.54

52. Gnabo and Moccero (2014) study the effects of PFvInf on monetary policy but do not find statistically significant effects.

53. However, there is no consensus about how good a proxy it is. See Baker, Bloom, and Davis (2015) for a recent review of the relevant literature. Note that we do not use the measure of uncertainty constructed by Baker et al. (2015) in our analysis since it includes information reflecting uncertainty about monetary policy.

54. In 1992 the SPF narrowed the bins it uses to summarize the forecast probability distributions of individual forecasters. See D'Amico and Orphanides (2014) and Andrade, Ghysels, and Idier (2013) for attempts to address this change in bin sizes.
Table 4: Summary statistics of FOMC-meeting-frequency variables

              Obs.   Mean    Std. Dev.   Min     Max     Corr. w/ Inflation   Corr. w/ Output gap
Output gap    165    -0.43   1.75        -4.97   3.08    -0.03                1.00
Inflation     165    2.88    0.95        1.30    5.60    1.00                 -0.03
hUnc          128    -0.13   0.48        -1      1       -0.23                -0.33
hIns          128    -0.06   0.33        -1      1       0.18                 0.15
mUnc          128    2.92    4.80        0       30.77   -0.06                0.14
mIns          128    0.83    2.45        0       16.67   -0.10                0.08
frInf         164    -0.02   0.21        -0.68   0.70    0.08                 0.04
frGap         165    -0.02   0.42        -2.00   0.83    0.02                 0.29

Note: See the text for a description of the sample periods. There is a missing value at the start of the sample for frInf.

Table 5: Summary statistics of quarterly variables

              Obs.   Mean    Std. Dev.   Min     Max     Corr. w/ Inflation   Corr. w/ Output gap
Inflation     83     2.90    0.98        1.33    5.32    1.00                 -0.02
Output gap    83     -0.43   1.72        -4.40   3.08    -0.02                1.00
VIX           83     20.48   7.92        10.58   62.09   -0.15                0.06
vInf          68     0.74    0.06        0.60    0.90    -0.22                -0.08
vGDP          68     0.90    0.12        0.67    1.30    -0.22                0.22
PFvInf        83     0.60    0.18        0.24    1.10    0.26                 -0.36
PFvGDP        83     0.71    0.25        0.30    1.62    0.30                 -0.04
Spread        83     2.11    0.66        1.37    5.60    -0.37                -0.34
sInf          68     0.05    0.08        -0.12   0.30    0.23                 -0.12
sGDP          68     -0.10   0.19        -0.54   0.47    -0.10                -0.48
PFsInf        83     0.06    0.21        -0.50   0.51    0.01                 -0.23
PFsGDP        83     0.29    0.27        -0.50   0.90    -0.31                0.23

Tables 4 and 5 display various summary statistics for Board staff forecasts of inflation and the output gap and the various proxies for risk management at the meeting and quarterly frequencies. Several entries in these tables are worth highlighting. First, the staff's forecasts of the output gap and inflation are essentially uncorrelated. Second, Spread is strongly negatively correlated with the forecast variables, consistent with it indicating downside risks
to the economy. Third, the remaining variables mostly display weak correlations with the staff forecasts, with two notable exceptions. Inflation forecast disagreement (PFvInf) has a large negative correlation with the output gap forecast: periods when the staff's outlook for activity is deteriorating often correspond to periods when there is a large amount of disagreement about the outlook for inflation. In addition, skewness in forecasters' GDP forecasts (sGDP) is strongly negatively correlated with the outlook for activity. This suggests private sector forecasters are slower than the Board staff to react to downside risks to activity.
[Table 6: Cross-correlations of FOMC-meeting-based risk management variables (hUnc, hIns, mUnc, mIns, frInf, frGap)]

[Table 7: Cross-correlations of quarterly risk management variables]
Tables 6 and 7 display cross-correlations of the FOMC-meeting-based and quarterly proxies, respectively. As suggested by Figures 4 and 5, the human and machine coded indicator variables for insurance and uncertainty are essentially uncorrelated. These variables also appear unrelated to the forecast revision variables. Several correlations among the quarterly proxies are worth noting. Forecaster uncertainty and disagreement about the GDP growth outlook (vGDP and PFvGDP) are both positively correlated with VIX and Spread, suggesting the financial variables are good indicators of uncertainty about the activity outlook. Interestingly, the correlations of VIX with the inflation asymmetry variables (sInf and PFsInf) are both negative: when markets perceive a lot of uncertainty in the stock market, the inflation outlook is skewed to the downside. The correlations of vGDP with vInf and of PFvGDP with PFvInf are both fairly large, suggesting uncertainty about inflation and GDP often move together. Finally, the correlations of the corresponding forecaster uncertainty and disagreement variables (vGDP with PFvGDP and vInf with PFvInf) are somewhat large too. Evidently disagreement among forecasters is similar to the median amount of uncertainty they see.
4.3 Results
Table 8 shows our policy rule estimates with and without the various FOMC-meeting-based variables; Tables 9 and 10 show estimates with and without the variance and skewness variables. The tables have the same layout, with the first column showing the policy rule estimates without any risk management variables and the other columns showing the results of adding the risk management variables one at a time, with the indicated coefficient estimate corresponding to \delta in (10). In the policy rules without any risk management variables the coefficient on inflation (\beta) is about 1.8 and on the output gap (\gamma) is about 0.8. These estimates are highly significant and are similar to estimated forecast-based policy rules in the literature. Introducing one of the proxies for risk management typically moves the reaction coefficients very little, with a couple of exceptions noted below.
Table 8 indicates that the human coding of uncertainty (hUnc) is statistically significant at the 5% level. The coefficient indicates that, on average, when uncertainty has shaded the policy decision above or below what would be dictated by the forecast (as determined by our analysis of the FOMC minutes), it has moved the notional target (see (10)) by about 50 basis points (bps). With interest rate smoothing the immediate impact is much smaller; the 95% confidence interval is 2-14 bps. The machine coding of uncertainty is significant at the 10% level but the effect is small: a one standard deviation increase in the number of sentences we associate with uncertainty raises the notional target by 14 bps.55 The insurance indicators (hIns and mIns) are not significant, but the point estimate of the human coded variable indicates its effects are similar to its uncertainty counterpart. Perhaps the most striking result in Table 8 is the large and highly significant coefficient on the output gap forecast revision variable (frGap).56 The point estimate indicates a one standard deviation (42 bps) positive surprise in the forecast raises the notional target by 60 bps over and above the impact this surprise has on the forecast itself; the confidence interval is 25-100 bps.
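The arithmetic connecting the notional and immediate effects follows directly from (11) and (12): the impact effect of a proxy is (1 - a) times its notional effect. For example, under an illustrative smoothing coefficient of a = 0.9 (an assumed value; the text above reports only the resulting confidence band),

\delta \cdot \Delta s \approx 0.50 percentage points (notional), while (1 - a) \cdot \delta \cdot \Delta s \approx 0.1 \times 0.50 = 0.05 percentage points = 5 bps (immediate),

which lies within the reported 2-14 bps interval.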
Table 8: FOMC meeting indicators in monetary policy rules

              (1)      (2)      (3)      (4)      (5)       (6)      (7)
Inflation     1.76     1.95     1.86     1.90     1.89      1.74     1.76
              (.11)    (.16)    (.17)    (.17)    (.17)     (.12)    (.11)
Output gap    .78      .88      .85      .83      .85       .71      .78
              (.06)    (.05)    (.05)    (.05)    (.05)     (.07)    (.07)
hUnc                   .40
                       (.16)
hIns                            .48
                                (.45)
mUnc                                     .03
                                         (.01)
mIns                                              -.0003
                                                  (.03)
frGap                                                       1.59
                                                            (.53)
frCore                                                               .50
                                                                     (.68)
Obs.          165      128      128      128      128       165      164
LM            .11      .07      .59      .58      .31       .49      .16

55. The machine coded variables are harder to interpret than the human coded indicators since they do not account for whether the mentions of uncertainty or insurance shaded policy up or down. It would be interesting to examine whether a more sophisticated parsing of the words would yield stronger results.

56. The magnitude and significance of this coefficient is partly driven by the observations from 2008, but not entirely. Ending the sample in 2007 lowers the point estimate to 1.1 but this remains significant at the 5% level.
Table 9: Variance measures in monetary policy rules

              (1)      (2)      (3)      (4)      (5)      (6)
Inflation     1.79     1.74     2.21     2.13     1.82     1.91
              (.12)    (.12)    (.17)    (.16)    (.12)    (.14)
Output gap    .79      .84      .78      .77      .76      .80
              (.06)    (.06)    (.07)    (.06)    (.07)    (.06)
VIX                    -.05
                       (.01)
vInf                            3.47
                                (1.67)
vGDP                                     .26
                                         (.98)
PFvInf                                            -.54
                                                  (.67)
PFvGDP                                                     -1.43
                                                           (.53)
Obs.          83       83       68       68       83       83
LM            .70      .80      .71      .59      .70      .92
Table 9 shows strong evidence that risk management has shaded policy away from that predicted by forecasts alone. The coefficient on VIX is highly significant and enters with a negative sign. A one standard deviation increase in VIX lowers the notional target FFR by 40 bps. Consistent with this, the coefficient on GDP forecast uncertainty as measured using point forecast dispersion (PFvGDP) is also highly significant and indicates roughly the same effect on the target. The coefficient on the variable measuring how forecasters view the uncertainty in their inflation forecasts (vInf) is also significant. In this case uncertainty shades policy higher; a one standard deviation increase in vInf raises the notional FFR target by about 25 bps.
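These magnitudes follow from scaling each coefficient in Table 9 by the corresponding standard deviation in Table 5:

VIX: -0.05 \times 7.92 \approx -0.40 percentage points (40 bps); vInf: 3.47 \times 0.06 \approx 0.21 percentage points (about 25 bps); PFvGDP: -1.43 \times 0.25 \approx -0.36 percentage points (roughly 40 bps).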
Table 10: Skewness measures in monetary policy rules

              (1)      (2)      (3)      (4)      (5)      (6)
Inflation     1.79     1.59     2.02     2.09     1.80     1.75
              (.12)    (.12)    (.16)    (.16)    (.11)    (.13)
Output gap    .79      .71      .80      .74      .87      .80
              (.06)    (.06)    (.07)    (.08)    (.08)    (.07)
Spread                 -.82
                       (.21)
sInf                            2.80
                                (1.18)
sGDP                                     -.79
                                         (.58)
PFsInf                                            1.80
                                                  (.64)
PFsGDP                                                     -.46
                                                           (.47)
Obs.          83       83       68       68       83       83
LM            .70      .94      .34      .67      .84      .72

Similarly strong evidence that skewness matters for policy decisions is indicated in Table 10. The coefficients on Spread (the interest rate spread indicator of downside risks to activity), sInf (skewness in the outlook for inflation measured from forecasters' own forecast
distributions) and PFsInf (skewness in the inflation outlook measured across point forecasts) all enter the policy rule significantly. An increase in downside risks to activity lowers the FFR, while an increase in upside risks to inflation raises it. The effects seem large: standard deviation increases in these proxies change the notional target by 50, 25 and 40 bps, respectively. The point estimates for skewness in the GDP outlook come in with unexpectedly negative signs (perceived upside risks to growth suggest lowering the FFR). However, these coefficients are relatively small and insignificant (standard deviation changes translate to no more than 15 bp changes in the notional target), and the standard errors are large enough that the expected sign cannot be ruled out.
Taken together, these results suggest that risk management concerns, broadly conceived, have had a statistically and economically significant impact on policy decisions over and above how those concerns are reflected in point forecasts. Risk management does not just appear in the words of the FOMC; it is apparently reflected in their deeds as well.
Conclusion
Our analysis has so far ignored two reputational issues that may be relevant to the liftoff calculus. A policymaker must take into account the effect that shocks might have on her reputation; in particular, policymakers may face large costs of reversing a decision. Empirically, it is well known that central banks tend to go through tightening and easing cycles, which in turn induce substantial persistence in the short-term interest rate. One reason why policymakers might be reluctant to reverse course is that it would damage their reputation, perhaps because the public would revise its confidence in the central bank's ability to understand and stabilize the economy. With high uncertainty, this reputation element would lead to more caution. In the case of liftoff, it argues for a longer delay in raising rates to avoid the reputational costs of a reversion back to the ZLB.
Another reputational concern is the signal the public might infer about the central bank's commitment to its stated policy goals if liftoff occurred with output or inflation still far below target. Large gaps on their own pose no threat to credibility if the public is confident that the economy is on a path to achieve the policy targets in a reasonable period of time and that the central bank is willing to accommodate this path. However, if there is elevated uncertainty over the strength of the economy, or a view that risks are skewed to the downside, then the public might believe that a risk management approach should delay liftoff. And as we found, the larger the current gaps, the more relevant these risk management concerns. Accordingly, early liftoff might be construed as backing away from appropriate risk management, and hence a less-than-enthusiastic endorsement of the ultimate policy targets by the central bank. This could work in the opposite direction for an economy in which the uncertainties or asymmetries underlying the risk management approach dictated an aggressive tightening move to guard against inflation. Either way, the central bank's reputation as having a credible commitment to achieving its policy goals could be brought into question.
References
Adam, K. and R. M. Billi (2007). Discretionary monetary policy and the zero lower bound
on nominal interest rates. Journal of Monetary Economics 54 (3), 728752.
Alcidi, C., A. Flamini, and A. Fracasso (2011). Policy regime changes, judgment and taylor
rules in the greenspan era. Economica 78, 89107.
Andrade, P., E. Ghysels, and J. Idier (2013). Tails of inflation forecasts and tales of monetary
policy. UNC Kenan-Flagler Research Paper No. 2013-17.
Baker, S. R., N. Bloom, and S. J. Davis (2015). Measuring economic policy uncertainty.
Stanford University manuscript.
Barlevy, G. (2011). Robustness and macroeconomic policy. Annual Review of Economics 3,
124.
Barsky, R., A. Justiniano, and L. Melosi (2014). The natural rate and its usefulness for
monetary policy making. American Economic Review 104 (4), 3743.
Basu, S. and B. Bundick (2013). Downside risk at the zero lower bound. Boston College
Manuscript.
Bekaert, G., M. Hoerova, and M. Lo Duca (2013). Risk, uncertainty and monetary policy.
Journal of Monetary Economics 60, 771788.
Bernanke, B. S. (2005). The changing policy landscape. In Monetary Policy Since the Onset
of the Crisis, Economic Policy Symposium, pp. 122. Federal Reserve Bank of Kansas
City.
Blinder, A. (1998). Central Banking in Theory and Practice. Cambridge, MA: MIT Press.
Bloom, N. (2009). The effect of uncertainty shocks. Econometrica 77 (3), 623685.
Bomfin, A. and L. Meyer (2010). Quantifying the effects of fed asset purchases on treasury
yields. Macroeconomics Advisors, Monetary Policy Insights: Fixed Income Focus.
Born, B. and J. Pfeifer (2014). Policy risk and the business cycle. Journal of Monetary
Economics 68, 6885.
Brainard, W. (1967). Uncertainty and the effectiveness of policy. American Economic
Review 57 (2), 411425.
Campbell, J. R., C. L. Evans, J. D. Fisher, and A. Justiniano (2012). Macroeconomic effects
of federal reserve forward guidance. Brookings Papers on Economic Activity Spring, 154.
Castelnuovo, E. (2003). Taylor rules, omitted variables, and interest rate smoothing in the
US. Economics Letters 81, 5559.
56
Chevapatrakul, T., T. Kim, and P. Mizen (2009). The Taylor principle and monetary policy approaching a zero bound on nominal rates: Quantile regression results for the United States and Japan. Journal of Money, Credit and Banking 41(8), 1705–1723.
Christiano, L., M. Eichenbaum, and C. Evans (2005). Nominal rigidities and the dynamic effects of a shock to monetary policy. Journal of Political Economy 113(1), 1–45.
Clarida, R., J. Gali, and M. Gertler (2000). Monetary policy rules and macroeconomic stability: Evidence and some theory. Quarterly Journal of Economics 115(1), 147–180.
Coibion, O., Y. Gorodnichenko, and J. Wieland (2012). The optimal inflation rate in New Keynesian models: Should central banks raise their inflation targets in light of the zero lower bound? Review of Economic Studies 79(4), 1371–1406.
D'Amico, S. and T. King (2013). Flow and stock effects of large-scale treasury purchases: Evidence on the importance of local supply. Journal of Financial Economics 108, 425–448.
D'Amico, S. and T. King (2015). Policy expectations, term premia, and macroeconomic performance. Federal Reserve Bank of Chicago manuscript.
D'Amico, S. and A. Orphanides (2014). Inflation uncertainty and disagreement in bond risk premia. Federal Reserve Bank of Chicago Working Paper 2014-24.
Dolado, J. J., R. María-Dolores, and M. Naveira (2005). Are monetary policy reaction functions asymmetric? The role of nonlinearity in the Phillips curve. European Economic Review 49(2), 485–503.
Dolado, J. J., R. María-Dolores, and F. J. Ruge-Murcia (2004). Nonlinear monetary policy rules: Some new evidence for the US. Studies in Nonlinear Dynamics and Econometrics 8(3).
Eggertsson, G. B. and M. Woodford (2003). The zero bound on interest rates and optimal monetary policy. Brookings Papers on Economic Activity 2003(1), 139–211.
Eggertsson, G. B. and B. Pugsley (2009). The mistake of 1937: A general equilibrium analysis. Monetary and Economic Studies 24(S-1), 16–37.
Eichenbaum, M. and J. D. Fisher (2007). Estimating the frequency of price re-optimization in Calvo-style models. Journal of Monetary Economics 54(7), 2032–2047.
Engen, E. M., T. Laubach, and D. Reifschneider (2015). The macroeconomic effects of the Federal Reserve's unconventional monetary policy. Finance and Economics Discussion Series 2015-005, Board of Governors of the Federal Reserve System.
English, W. B., D. Lopez-Salido, and R. Tetlow (2013). The Federal Reserve's framework for monetary policy: Recent changes and new questions. Finance and Economics Discussion Series 2013-76, Board of Governors of the Federal Reserve System.
Evans, C. L. (2014). Patience is a virtue when normalizing monetary policy. Text of speech to the Peterson Institute for International Economics.
Fernández-Villaverde, J., P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez (2012). Fiscal volatility shocks and economic activity. Federal Reserve Bank of Philadelphia Working Paper No. 11-32/R.
Friedman, M. (1969). Monetary Studies of the National Bureau. Chicago, IL: Aldine.
Fuhrer, J. C. (2000). Habit formation in consumption and its implications for monetary-policy models. American Economic Review 90(3), 367–390.
Gagnon, J., M. Raskin, J. Remache, and B. Sack (2010). Large-scale asset purchases by the Federal Reserve: Did they work? Federal Reserve Bank of New York Staff Report No. 441.
Gali, J. (2008). Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework. Princeton, NJ: Princeton University Press.
Gali, J. and M. Gertler (1999). Inflation dynamics: A structural econometric analysis. Journal of Monetary Economics 44, 195–222.
Gerlach-Kristen, P. (2004). Interest rate smoothing: Monetary policy inertia or unobserved variables? Contributions in Macroeconomics 4(1).
Gilchrist, S. and E. Zakrajsek (2012). Credit spreads and business cycle fluctuations. American Economic Review 102(4), 1692–1720.
Gnabo, J. and D. N. Moccero (2014). Risk management, nonlinearity and aggressiveness in monetary policy: The case of the US Fed. Journal of Banking & Finance.
Greenspan, A. (2004). Risk and uncertainty in monetary policy. American Economic Review 94(2), 33–40.
Hamilton, J. D. and J. C. Wu (2012). The effectiveness of alternative monetary policy tools in a zero lower bound environment. Journal of Money, Credit and Banking 44(s1), 3–46.
Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57, 357–384.
Hamilton, J. D., E. S. Harris, J. Hatzius, and K. D. West (2015). The equilibrium real funds rate: Past, present and future. UCSD manuscript.
Hansen, L. and T. Sargent (2008). Robustness. Princeton, NJ: Princeton University Press.
Hansen, S. and M. McMahon (2014). First impressions matter: Signaling as a source of policy dynamics. University of Warwick manuscript.
Johannsen, B. K. (2014). When are the effects of fiscal uncertainty large? Finance and Economics Discussion Series 2014-40, Board of Governors of the Federal Reserve System.
Kiesel, K. and M. H. Wolters (2014). Estimating monetary policy rules when the zero lower bound on nominal interest rates is approached. Kiel Working Paper No. 1898.
Kiley, M. T. (2012). The aggregate demand effects of short- and long-term interest rates. Finance and Economics Discussion Series 2012-54, Board of Governors of the Federal Reserve System.
Kilian, L. and S. Manganelli (2008). The central banker as a risk manager: Estimating the Federal Reserve's preferences under Greenspan. Journal of Money, Credit and Banking 40(6).
Krishnamurthy, A. and A. Vissing-Jorgensen (2013). The ins and outs of LSAPs. In Global Dimensions of Unconventional Monetary Policy, Economic Policy Symposium. Federal Reserve Bank of Kansas City.
Krugman, P. R. (1998). It's baaack: Japan's slump and the return of the liquidity trap. Brookings Papers on Economic Activity Fall, 137–187.
Laubach, T. and J. C. Williams (2003). Measuring the natural rate of interest. The Review of Economics and Statistics 85(4), 1063–1070.
Laxton, D., D. Rose, and D. Tambakis (1999). The U.S. Phillips curve: The case for asymmetry. Journal of Economic Dynamics & Control 23, 1459–1485.
Mas-Colell, A., M. D. Whinston, and J. R. Green (1995). Microeconomic Theory. Oxford, United Kingdom: Oxford University Press.
Mumtaz, H. and P. Surico (2015). The transmission mechanism in good and bad times. International Economic Review, forthcoming.
Nakata, T. (2013a). Optimal fiscal and monetary policy with occasionally binding zero bound constraints. Finance and Economics Discussion Series 2013-40, Board of Governors of the Federal Reserve System.
Nakata, T. (2013b). Uncertainty at the zero lower bound. Finance and Economics Discussion Series 2013-09, Board of Governors of the Federal Reserve System.
Nakata, T. and S. Schmidt (2014). Conservatism and liquidity traps. Finance and Economics Discussion Series 2014-105, Board of Governors of the Federal Reserve System.
Nakov, A. A. (2008). Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking 4(2), 73–127.
Orphanides, A. and J. C. Williams (2002). Robust monetary policy rules with unknown natural rates. Brookings Papers on Economic Activity Fall, 63–145.
Reifschneider, D. and J. C. Williams (2000). Three lessons for monetary policy in a low-inflation era. Journal of Money, Credit, and Banking 32(4), 936–966.
Romer, C. D. and D. H. Romer (1989). Does monetary policy matter? A new test in the spirit of Friedman and Schwartz. In O. Blanchard and S. Fischer (Eds.), NBER Macroeconomics Annual 1989, Volume 4.
Rudebusch, G. and L. E. Svensson (1999). Policy rules for inflation targeting. In J. B. Taylor (Ed.), Monetary Policy Rules. Chicago, IL: University of Chicago Press.
Rudebusch, G. D. (2001). Is the Fed too timid? Monetary policy in an uncertain world. The Review of Economics and Statistics 83(2), 203–217.
Sack, B. (2000). Does the Fed act gradually? A VAR analysis. Journal of Monetary Economics 46(1), 229–256.
Smets, F. and R. Wouters (2007). Shocks and frictions in US business cycles: A Bayesian DSGE approach. The American Economic Review 97(3), 586–606.
Surico, P. (2007). The Fed's monetary policy rule and U.S. inflation: The case of asymmetric preferences. Journal of Economic Dynamics & Control 31, 305–324.
Svensson, L. and M. Woodford (2002). Optimal policy with partial information in a forward-looking model: Certainty-equivalence redux. Manuscript.
Svensson, L. and M. Woodford (2003). Indicator variables for optimal policy. Journal of Monetary Economics 50, 691–720.
Taylor, J. B. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy 39, 195–214.
Tenreyro, S. and G. Thwaites (2015). Pushing on a string: US monetary policy is less powerful in recessions. Manuscript.
Werning, I. (2012). Managing a liquidity trap: Monetary and fiscal policy. MIT manuscript.
Williams, J. (2013). A defense of moderation in monetary policy. Journal of Macroeconomics 38, 137–150.
Woodford, M. (2003). Interest and Prices. Princeton, NJ: Princeton University Press.
Woodford, M. (2012). Methods of policy accommodation at the interest-rate lower bound. In The Changing Policy Landscape, Economic Policy Symposium, pp. 185–288. Federal Reserve Bank of Kansas City.
Appendix

A Optimal policy in the forward-looking model with uncertainty about cost-push shocks

Our previous analysis assumed that the unknown shock that might trigger a binding ZLB at time 1 is the natural real rate. We now consider the case where it is the cost-push inflation shock $u_1$: i.e., $r^n_t = r^*$ for $t \geq 1$ and $u_t = 0$ for all $t \geq 2$, but $u_1$ is distributed according to the probability density function $f_u(\cdot)$. We assume $E(u_1) = 0$.

To find optimal policy, we again solve the model backward. As before, optimal policy after time 2 is simply $x_t = \pi_t = 0$, which is obtained by setting $i_t = r^* > 0$. At time 1, the ZLB may bind if the cost-push shock is negative enough. Specifically, after seeing $u_1$, we solve
\[
\min_{x_1} \; \frac{1}{2}\left(\pi_1^2 + \lambda x_1^2\right), \quad \text{s.t.} \quad \pi_1 = \kappa x_1 + u_1, \quad x_1 \leq \frac{r^*}{\sigma}.
\]
If $u_1 \geq \underline{u}_1 = -\frac{(\lambda + \kappa^2) r^*}{\sigma \kappa}$, the ZLB does not bind, and optimal policy strikes a balance between the output gap and inflation:
\[
x_1 = -\frac{\kappa}{\lambda + \kappa^2}\, u_1, \qquad \pi_1 = \frac{\lambda}{\lambda + \kappa^2}\, u_1.
\]
If $u_1 < \underline{u}_1$, the ZLB binds, so even though the central bank would like to cut rates more to create a larger boom and hence more inflation, this is not feasible. Mathematically,
\[
x_1 = \frac{r^*}{\sigma}, \qquad \pi_1 = \frac{\kappa r^*}{\sigma} + u_1.
\]
To calculate optimal policy at time 0, we require expected inflation and output. These are given by
\[
E\pi_1 = \int_{-\infty}^{\underline{u}_1} \left(\frac{\kappa r^*}{\sigma} + u\right) f_u(u)\,du + \int_{\underline{u}_1}^{\infty} \frac{\lambda}{\lambda + \kappa^2}\, u f_u(u)\,du = \frac{\kappa r^*}{\sigma} P + \frac{\kappa^2}{\lambda + \kappa^2} M,
\]
where $P = \int_{-\infty}^{\underline{u}_1} f_u(u)\,du$ is the probability that the ZLB binds and $M = \int_{-\infty}^{\underline{u}_1} u f_u(u)\,du$. Note $M < 0$ since $E u_1 = 0$. Expected output is similarly
\[
E x_1 = \frac{E\pi_1}{\kappa} = \frac{r^*}{\sigma} P + \frac{\kappa}{\lambda + \kappa^2} M.
\]
If there were no ZLB, we would have $E\pi_1 = E x_1 = 0$. With the ZLB, we do worse on output and inflation when there is a negative enough cost-push shock, and hence $E x_1 < 0$ and $E\pi_1 < 0$.

This implies that optimal policy at time 0 is affected exactly as in the case of natural rate uncertainty: (i) the lower expected output gap at time 1 leads to a lower output gap at time 0 through the IS equation; (ii) the lower expected inflation $E\pi_1$ leads to a lower output gap at time 0 through higher real rates; (iii) the lower expected inflation finally reduces inflation today. All of these lead to looser policy. Formally, the optimal policy problem at time 0 is, given shocks $r^n_0, u_0$, to solve
\[
\min_{x_0} \; \frac{1}{2}\left(\pi_0^2 + \lambda x_0^2\right), \quad \text{s.t.} \quad x_0 \leq \frac{r^n_0}{\sigma} + E x_1 + \frac{E\pi_1}{\sigma}, \quad \pi_0 = \beta E\pi_1 + \kappa x_0 + u_0.
\]
If $r^n_0$ is above the threshold at which the ZLB binds at time 0, optimal policy is described by
\[
x_0 = -\frac{\kappa}{\lambda + \kappa^2}\left(\beta E\pi_1 + u_0\right), \qquad \pi_0 = \frac{\lambda}{\lambda + \kappa^2}\left(\beta E\pi_1 + u_0\right),
\]
\[
i_0 = r^n_0 + E\pi_1 + \sigma\left(E x_1 - x_0\right),
\]
where $E\pi_1 = \frac{\kappa r^*}{\sigma} P + \frac{\kappa^2}{\lambda + \kappa^2} M$.

Proposition 3 Suppose the uncertainty is about cost-push shocks. Then: (1) optimal policy is looser today when the probability of a binding ZLB tomorrow is positive; (2) optimal policy is independent of the distribution of the cost-push shock tomorrow $u_1$ over values for which the ZLB does not bind, i.e., of $\{f_u(u)\}_{u \geq \underline{u}_1}$; only $\{f_u(u)\}_{u < \underline{u}_1}$ is relevant, and only through the sufficient statistics $\int_{-\infty}^{\underline{u}_1} f_u(u)\,du$ and $\int_{-\infty}^{\underline{u}_1} u f_u(u)\,du$.

Because $E x_1$ and $E\pi_1$ now depend on $P = \Pr(u_1 \leq \underline{u}_1)$, one cannot state a general result about mean-preserving spreads, since this probability might fall with uncertainty for some unusual distributions. However, if $u_1$ is normally distributed with mean 0, and given that $\underline{u}_1 < 0$, the result that more uncertainty leads to lower rates today still holds.

An important implication is that the risk that inflation picks up does not affect policy today. If a high $u_1$ is realized tomorrow, it will be bad; however, there is nothing that policy today can do about it. We finally present an example to illustrate our results.

Example 1 Suppose that $u_1$ can take two values, $u_1 = +\epsilon$ (with probability 1/2) and $u_1 = -\epsilon$ (with probability 1/2). If $\epsilon$ is small, then $P = M = 0$, hence $E\pi_1 = E x_1 = 0$, and optimal policy is decided taking into account $r^n_0$ and $u_0$ only. If $\epsilon$ is large enough, then $P = 1/2$, $M = -\epsilon/2$, and $E x_1 = \frac{1}{2}\left(\frac{r^*}{\sigma} - \frac{\kappa \epsilon}{\lambda + \kappa^2}\right)$, which is negative since $-\epsilon < \underline{u}_1 = -\frac{(\lambda + \kappa^2) r^*}{\sigma \kappa}$.
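To make Example 1 concrete, the following Python sketch evaluates the sufficient statistics $P$ and $M$ and the implied time-0 policy under the two-point distribution. The parameter values, the function name, and the assumption $r^n_0 = r^*$ with the ZLB slack at time 0 are our illustrative choices, not the paper's calibration.

```python
# Illustrative parameters, not the paper's calibration.
beta, kappa, lam, sigma, rstar = 0.99, 0.05, 0.25, 1.0, 0.01

u_bar = -(lam + kappa**2) * rstar / (sigma * kappa)   # ZLB threshold for u1

def time0_policy(eps, u0=0.0):
    # Two-point risk: u1 = +eps or -eps, each with probability 1/2.
    P = 0.5 if -eps < u_bar else 0.0                  # prob. the ZLB binds
    M = -eps / 2 if -eps < u_bar else 0.0             # truncated mean of u1
    Epi1 = (kappa * rstar / sigma) * P + kappa**2 / (lam + kappa**2) * M
    Ex1 = Epi1 / kappa
    # Interior time-0 optimum (assumes the ZLB is slack at time 0).
    x0 = -kappa / (lam + kappa**2) * (beta * Epi1 + u0)
    pi0 = lam / (lam + kappa**2) * (beta * Epi1 + u0)
    i0 = rstar + Epi1 + sigma * (Ex1 - x0)
    return Epi1, Ex1, x0, pi0, i0

print(time0_policy(eps=0.001))  # small risk: P = M = 0, policy unaffected
print(time0_policy(eps=0.10))   # large risk: Epi1 < 0, so i0 falls below rstar
```

With these numbers, a small $\epsilon$ leaves policy unchanged, while a large $\epsilon$ delivers $E\pi_1 < 0$ and a lower $i_0$, as the proposition states.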
B Value function in the backward-looking model

The value function for $t \geq 2$ solves the following Bellman equation, corresponding to a deterministic optimal control problem:
\[
V(\pi_{-1}, x_{-1}) = \min_{x, \pi} \; \frac{1}{2}\left(\pi^2 + \lambda x^2\right) + \beta V(\pi, x),
\]
\[
\text{s.t.} \quad \pi = \alpha \pi_{-1} + \kappa x, \qquad x = x_{-1} - \frac{1}{\sigma}\left(i - r^* - \pi_{-1}\right).
\]
We use a guess-and-verify method to show that the value function takes the form
\[
V(\pi_{-1}, x_{-1}) = \frac{W}{2}\,\pi_{-1}^2,
\]
and that the policy rules are linear: $\pi = g \pi_{-1}$ and $x = h \pi_{-1}$ for two numbers $g$ and $h$. To verify the guess, solve
\[
\min_{x} \; \frac{1}{2}\left(1 + \beta W\right)\left(\alpha \pi_{-1} + \kappa x\right)^2 + \frac{1}{2}\lambda x^2,
\]
leading to
\[
x = -\frac{(1 + \beta W)\alpha \kappa}{(1 + \beta W)\kappa^2 + \lambda}\,\pi_{-1}, \qquad \pi = \frac{\alpha \lambda}{(1 + \beta W)\kappa^2 + \lambda}\,\pi_{-1},
\]
which verifies our guess of linear rules. To find $W$, plug this back into the minimization problem; we look for $W$ to satisfy, for all $\pi_{-1}$:
\[
\frac{W}{2}\,\pi_{-1}^2 = \frac{1}{2}\left(1 + \beta W\right)\left(\frac{\alpha \lambda}{(1 + \beta W)\kappa^2 + \lambda}\right)^2 \pi_{-1}^2 + \frac{1}{2}\lambda\left(\frac{(1 + \beta W)\alpha \kappa}{(1 + \beta W)\kappa^2 + \lambda}\right)^2 \pi_{-1}^2,
\]
which simplifies to the scalar fixed-point condition
\[
W = \frac{(1 + \beta W)\,\alpha^2 \lambda}{(1 + \beta W)\kappa^2 + \lambda}.
\]
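The scalar fixed point for $W$ is easy to compute by iteration. A minimal Python sketch, with illustrative parameter values that are not the paper's calibration:

```python
# Illustrative parameters, not the paper's calibration.
beta, alpha, kappa, lam = 0.99, 0.85, 0.05, 0.25

# Iterate W = (1 + beta*W) * alpha^2 * lam / ((1 + beta*W)*kappa^2 + lam).
W = 0.0
for _ in range(10000):
    W_new = (1 + beta * W) * alpha**2 * lam / ((1 + beta * W) * kappa**2 + lam)
    if abs(W_new - W) < 1e-12:
        break
    W = W_new

D = (1 + beta * W) * kappa**2 + lam
g = alpha * lam / D                      # pi = g * pi_{-1}
h = -(1 + beta * W) * alpha * kappa / D  # x  = h * pi_{-1}
print(W, g, h)
```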
C Proof of Proposition 2

We start with a simple, more general result; we then show how our model fits as a special case of this result.

Lemma 1 Consider the problem
\[
V(\theta) = \max_{x_0} \; E_\theta\, J(x_0, \varepsilon), \qquad J(x_0, \varepsilon) = \max_{x_1} \; F(x_1, x_0, \varepsilon) \quad \text{s.t.} \quad x_1 \leq f(x_0) + \varepsilon,
\]
where $F$ is quadratic (with $F_{11} < 0$) and $f$ is linear. Suppose that higher $\theta$ indexes a more risky distribution of $\varepsilon$ in the sense of second-order stochastic dominance. Suppose that the scalar $F_{11} + F_{13} < 0$ and that the scalar $f'\left(F_{11} + F_{13}\right) + F_{21}\left(1 + \frac{F_{13}}{F_{11}}\right) < 0$. Then $x_0$ is increasing in $\theta$.

Proof. For a given distribution of $\varepsilon$, i.e., a given $\theta$ with density $h(\cdot, \theta)$, the optimal $x_0$ satisfies the first-order condition
\[
E_\theta\, J_1(x_0(\theta), \varepsilon) = 0.
\]
It is straightforward from the implicit function theorem that
\[
\frac{dx_0}{d\theta} = -\frac{\int J_1(x_0(\theta), \varepsilon)\, h_\theta(\varepsilon, \theta)\, d\varepsilon}{\int J_{11}(x_0(\theta), \varepsilon)\, h(\varepsilon, \theta)\, d\varepsilon},
\]
and the denominator is negative by the second-order condition. Given that higher $\theta$ indexes a more risky distribution, the numerator will be positive if the function $J_1$ is convex in $\varepsilon$; we now prove this, which demonstrates our result.

To prove that $J_1$ is convex in $\varepsilon$, we first calculate $J$. Define the unconstrained maximum
\[
x_1^*(x_0, \varepsilon) = \arg\max_{x_1} F(x_1, x_0, \varepsilon),
\]
and let $\bar{\varepsilon}(x_0)$ be the value of $\varepsilon$ at which the constraint just binds, $x_1^*(x_0, \bar{\varepsilon}) = f(x_0) + \bar{\varepsilon}$. For $\varepsilon < \bar{\varepsilon}$ the constraint binds and $J(x_0, \varepsilon) = F(f(x_0) + \varepsilon, x_0, \varepsilon)$, so that $J_1 = f' F_1 + F_2$; for $\varepsilon > \bar{\varepsilon}$, the envelope theorem gives $J_1 = F_2$ evaluated at $x_1^*(x_0, \varepsilon)$. Since $F$ is quadratic, $J_1$ is piecewise linear in $\varepsilon$, with
\[
\frac{\partial J_1}{\partial \varepsilon} = f' F_{11} + f' F_{13} + F_{21} + F_{23} \quad \text{for } \varepsilon < \bar{\varepsilon}, \qquad \frac{\partial J_1}{\partial \varepsilon} = -\frac{F_{21} F_{13}}{F_{11}} + F_{23} \quad \text{for } \varepsilon > \bar{\varepsilon}.
\]
Note that $J_1$ is continuous in $\varepsilon$ since at the boundary between the two expressions, $F_1(f(x_0) + \bar{\varepsilon}, x_0, \bar{\varepsilon}) = F_1(x_1^*(x_0, \bar{\varepsilon}), x_0, \bar{\varepsilon}) = 0$ by optimality of $x_1^*$. Hence $J_1$ is convex in $\varepsilon$ if and only if its slope is weakly smaller for $\varepsilon < \bar{\varepsilon}$ than for $\varepsilon > \bar{\varepsilon}$, which is exactly the assumed condition $f'\left(F_{11} + F_{13}\right) + F_{21}\left(1 + \frac{F_{13}}{F_{11}}\right) < 0$.
We now map our model into this formulation. The Bellman equation of the backward-looking model is
\[
W_t(\pi_{t-1}, x_{t-1}, \rho, u) = \min_{\pi_t, x_t, i_t} \; \frac{1}{2}\left(\pi_t^2 + \lambda x_t^2\right) + \beta E_{\rho', u'}\, W_{t+1}(\pi_t, x_t, \rho', u'),
\]
\[
\text{s.t.} \quad x_t = x_{t-1} - \frac{1}{\sigma}\left(i_t - \rho - \pi_{t-1}\right), \qquad \pi_t = \alpha \pi_{t-1} + \kappa x_t + u, \qquad i_t \geq 0.
\]
Using the Phillips curve to substitute $x_t = \left(\pi_t - \alpha \pi_{t-1} - u_t\right)/\kappa$, the ZLB constraint $i_t \geq 0$ becomes an upper bound on inflation,
\[
\pi_t \leq \bar{\pi}_t,
\]
where
\[
\bar{\pi}_t = \alpha \pi_{t-1} + \left(\pi_{t-1} - \alpha \pi_{t-2} - u_{t-1}\right) + \frac{\kappa}{\sigma}\left(\rho + \pi_{t-1}\right) + u_t = \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\pi_{t-1} + \frac{\kappa}{\sigma}\rho - \alpha \pi_{t-2} - u_{t-1} + u_t.
\]
The Bellman equation can then be written in terms of inflation alone:
\[
W_t(\pi_{t-1}, \bar{\pi}_t, \rho, u) = \min_{\pi_t \leq \bar{\pi}_t} \; \frac{1}{2}\pi_t^2 + \frac{\lambda}{2\kappa^2}\left(\pi_t - \alpha \pi_{t-1} - u\right)^2 + \beta E_{\rho', u'}\, W_{t+1}(\pi_t, \bar{\pi}_{t+1}, \rho', u'),
\]
\[
\text{s.t.} \quad \bar{\pi}_{t+1} = \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\pi_t + \frac{\kappa}{\sigma}\rho' - \alpha \pi_{t-1} - u + u'.
\]
We can simplify this further given our specific scenario. Given that there is no uncertainty for $t \geq 2$ and that the ZLB constraint does not bind then, the value function is simply
\[
W(\pi_{t-1}) = \min_{\pi_t} \; \frac{1}{2}\pi_t^2 + \frac{\lambda}{2\kappa^2}\left(\pi_t - \alpha \pi_{t-1}\right)^2 + \beta W(\pi_t),
\]
which, as shown in Appendix B, takes the form
\[
W(\pi_{t-1}) = \frac{W}{2}\,\pi_{t-1}^2.
\]
The value function at time $t = 1$ must take into account that the ZLB may bind. We call this value $V$:
\[
V(\pi_0, \bar{\pi}_1, u_1) = \min_{\pi_1 \leq \bar{\pi}_1} \; \frac{1}{2}\pi_1^2 + \frac{\lambda}{2\kappa^2}\left(\pi_1 - \alpha \pi_0 - u_1\right)^2 + \beta\frac{W}{2}\pi_1^2.
\]
Finally, the time-0 problem is
\[
U(\pi_{-1}, u_0; \theta) = \min_{\pi_0} \; \frac{1}{2}\pi_0^2 + \frac{\lambda}{2\kappa^2}\left(\pi_0 - \alpha \pi_{-1} - u_0\right)^2 + \beta E_{\bar{\pi}_1, u_1}\, V(\pi_0, \bar{\pi}_1, u_1),
\]
\[
\text{s.t.} \quad \bar{\pi}_1 = \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\pi_0 + \frac{\kappa}{\sigma}\rho_1 - \alpha \pi_{-1} - u_0 + u_1,
\]
where $\theta$ indexes the distribution of either $\rho_1$ (the natural rate shock) or $u_1$. Note that once we have solved for $\pi_0$, we can find $x_0 = \left(\pi_0 - \alpha \pi_{-1} - u_0\right)/\kappa$ and $i_0 = \rho_0 + \pi_{-1} + \sigma\left(x_{-1} - x_0\right)$ immediately. Hence a higher (lower) $\pi_0$ implies a higher (lower) $x_0$ and a lower (higher) $i_0$.

To map our problem into the formulation of the lemma, we first consider the case where the uncertainty is over natural rate shocks (so $u_1$ is known). In this case, we define
\[
F(\pi_1, \pi_0, \varepsilon) = -\frac{1}{2}\pi_0^2 - \frac{\lambda}{2\kappa^2}\left(\pi_0 - \alpha \pi_{-1} - u_0\right)^2 - \beta\left[\frac{1}{2}\pi_1^2 + \frac{\lambda}{2\kappa^2}\left(\pi_1 - \alpha \pi_0 - u_1\right)^2 + \beta\frac{W}{2}\pi_1^2\right],
\]
\[
f(\pi_0) = \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\pi_0 - \alpha \pi_{-1} - u_0 + u_1,
\]
and
\[
J(\pi_0, \varepsilon) = \max_{\pi_1} \; F(\pi_1, \pi_0, \varepsilon) \quad \text{s.t.} \quad \pi_1 \leq f(\pi_0) + \varepsilon,
\]
with $\varepsilon = \frac{\kappa}{\sigma}\rho_1$. Clearly $F$ is quadratic and $f$ is linear. We have $F_{13} = 0$, so $F_{11} + F_{13} < 0$ is satisfied, and
\[
f'\left(F_{11} + F_{13}\right) + F_{21}\left(1 + \frac{F_{13}}{F_{11}}\right) = f' F_{11} + F_{21} = -\beta\left[\left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\left((1 + \beta W) + \frac{\lambda}{\kappa^2}\right) - \frac{\alpha \lambda}{\kappa^2}\right] < 0,
\]
so the lemma applies.

Consider next the case where the uncertainty is over cost-push shocks. We now define
\[
f(\pi_0) = \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\pi_0 - \alpha \pi_{-1} - u_0 + \frac{\kappa}{\sigma}\rho_1,
\]
and $\varepsilon = u_1$ (and assume $\rho_1$ is known). We now need to verify the two conditions. First,
\[
F_{11} + F_{13} = -\beta\left[(1 + \beta W) + \frac{\lambda}{\kappa^2}\right] + \beta\frac{\lambda}{\kappa^2} = -\beta(1 + \beta W) < 0.
\]
Second,
\[
f'\left(F_{11} + F_{13}\right) + F_{21}\left(1 + \frac{F_{13}}{F_{11}}\right) = \beta(1 + \beta W)\left[\frac{\alpha \lambda}{(1 + \beta W)\kappa^2 + \lambda} - \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\right]
\]
\[
< \beta(1 + \beta W)\left[\alpha - \left(\alpha + 1 + \frac{\kappa}{\sigma}\right)\right] = -\beta(1 + \beta W)\left(1 + \frac{\kappa}{\sigma}\right) < 0,
\]
so the lemma applies in this case as well.
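As a quick sanity check on this algebra, the two sign conditions of Lemma 1 for the cost-push case can be evaluated numerically. The parameter values below, including the value of $W$, are arbitrary placeholders rather than the paper's calibration:

```python
# Placeholder values, including W; any admissible parameters work.
beta, alpha, kappa, lam, sigma, W = 0.99, 0.85, 0.05, 0.25, 1.0, 0.2

F11 = -beta * ((1 + beta * W) + lam / kappa**2)
F13 = beta * lam / kappa**2
F21 = alpha * beta * lam / kappa**2
fprime = alpha + 1 + kappa / sigma

cond1 = F11 + F13                                    # equals -beta*(1 + beta*W)
cond2 = fprime * (F11 + F13) + F21 * (1 + F13 / F11)
print(cond1 < 0, cond2 < 0)                          # both True
```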
D Numerical methods

We present here the numerical methods used to solve the forward-looking and the backward-looking models. In both cases, we make the following assumptions regarding exogenous variables. First, there is a date $T$ such that, for $t \geq T$, the cost-push shock is zero and the natural rate is constant: $u_t = 0$ and $r^n_t = r^*$. Second, for $t < T$, the cost-push shock $u_t$ follows a Markov chain with transition probability $P_u(u'|u)$. The natural rate $r^n_t$ is the sum of a deterministic component and a Markov chain: $r^n_t = f_t + \rho_t$, where $\rho_t$ has transition probability $P_\rho(\rho'|\rho)$, and $f_t$ is increasing and satisfies $f_T = r^*$. We write $r^n_t(\rho) = f_t + \rho$. The stochastic processes $\rho_t$ and $u_t$ are independent. In practice we simply use $f_t = r^n_0 + \frac{t}{T_0}\left(r^* - r^n_0\right)$ for $0 \leq t \leq T_0$ and $f_t = r^*$ for $T_0 \leq t < T$. We choose the Markov chains for $\rho$ and for $u$ to each approximate an AR(1) process using the Rouwenhorst method. Our Matlab code will be made available on the following website: https://sites.google.com/site/fgourio/
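The Rouwenhorst step is standard; as a concrete illustration, here is a minimal Python sketch of it (the authors' posted code is in Matlab, and the function name and arguments here are our own):

```python
import numpy as np

def rouwenhorst(n, rho, sigma_eps):
    # Discretize z' = rho * z + eps, eps ~ N(0, sigma_eps^2),
    # into an n-state Markov chain using the Rouwenhorst method.
    p = (1 + rho) / 2
    P = np.array([[p, 1 - p], [1 - p, p]])
    for m in range(3, n + 1):
        Pm = np.zeros((m, m))
        Pm[:m - 1, :m - 1] += p * P
        Pm[:m - 1, 1:] += (1 - p) * P
        Pm[1:, :m - 1] += (1 - p) * P
        Pm[1:, 1:] += p * P
        Pm[1:-1, :] /= 2  # interior rows are counted twice above
        P = Pm
    # Stationary std of z is sigma_eps / sqrt(1 - rho^2); the grid
    # spans +/- sqrt(n - 1) times that value, evenly spaced.
    psi = sigma_eps * np.sqrt((n - 1) / (1 - rho**2))
    return np.linspace(-psi, psi, n), P
```

For example, rouwenhorst(7, 0.85, 0.005) returns a seven-point grid and transition matrix approximating a persistent shock process.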
D.1 Forward-looking model

Our theoretical analysis assumed for simplicity (and as is common in the literature) a zero-inflation steady state and the ZLB constraint $i_t \geq 0$. To provide more useful numerical illustrations, we consider the case of a positive inflation target $\pi^*$. We assume that the equations above apply when $\pi_t$ is the deviation of inflation from target and $i_t$ is the nominal rate minus the inflation target. The ZLB is then modified to $i_t \geq Z \stackrel{\text{def}}{=} -\pi^*$.⁵⁸
D.1.1 Optimal policy under discretion

Optimal policy under discretion can be easily calculated in this model. For $t \geq T$, we have $x_t = \pi_t = 0$. For $t < T$, the optimal policy is given by the solution to
\[
L_t(\rho, u) = \min_{i_t \geq Z} \; \frac{1}{2}\left(\pi_t^2 + \lambda x_t^2\right) + \beta E_t\, L_{t+1}(\rho', u'),
\]
\[
\text{s.t.} \quad \pi_t = \beta E_t \pi_{t+1} + \kappa x_t + u, \qquad x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - r^n_t(\rho) - E_t \pi_{t+1}\right),
\]
where the future expectations $E_t \pi_{t+1}$ and $E_t x_{t+1}$ are taken as given. Since the current decision for $i_t$ does not affect the future loss $L_{t+1}$, the optimal choice is found by simply minimizing $\pi_t^2 + \lambda x_t^2$.

⁵⁸ One technical issue is that the long-run Phillips curve is not vertical in this model. To make sure that $\pi^*$ is indeed long-run inflation when there is no uncertainty, we assume that the true model is
\[
\pi_t = \beta E_t \pi_{t+1} + (1 - \beta)\pi^* + \kappa x_t + u_t,
\]
and the IS curve is unchanged. The policymaker's objective is to minimize the expected discounted sum of $\left(\pi_t - \pi^*\right)^2 + \lambda x_t^2$. We can then redefine $\tilde{\pi}_t = \pi_t - \pi^*$ and $\tilde{i}_t = i_t - \pi^*$. The model is now exactly the one written above. Our modification of the Phillips curve is minimal since $(1 - \beta)\pi^*$ is a very small number. We make the same assumption in the backward-looking model.
Denote
\[
a_t(\rho, u) = E_t\left(x_{t+1} \mid \rho_t = \rho,\ u_t = u\right), \qquad b_t(\rho, u) = E_t\left(\pi_{t+1} \mid \rho_t = \rho,\ u_t = u\right),
\]
and define $X_t(\rho, u) = 1$ if the ZLB binds at time $t$ in state $(\rho, u)$, and $0$ if not.

Suppose first that the ZLB does not bind; taking first-order conditions then yields
\[
x_t^{nb}(\rho, u) = -\frac{\kappa}{\lambda + \kappa^2}\left(\beta E_t \pi_{t+1} + u\right) = -\frac{\kappa}{\lambda + \kappa^2}\left(\beta b_t(\rho, u) + u\right),
\]
\[
\pi_t^{nb}(\rho, u) = \frac{\lambda}{\lambda + \kappa^2}\left(\beta E_t \pi_{t+1} + u\right) = \frac{\lambda}{\lambda + \kappa^2}\left(\beta b_t(\rho, u) + u\right),
\]
\[
i_t^{nb}(\rho, u) = r^n_t(\rho) + b_t(\rho, u) + \sigma\left(a_t(\rho, u) - x_t^{nb}(\rho, u)\right).
\]
If this solution is feasible, then it is clearly the optimum. If this solution is not feasible, then the optimum is simply to set the nominal interest rate to its floor. Hence, the ZLB binds if the nominal interest rate required to implement the unconstrained solution is below the floor, i.e.,
\[
X_t(\rho, u) = 1 \quad \text{if} \quad r^n_t(\rho) + b_t(\rho, u) + \sigma\left(a_t(\rho, u) + \frac{\kappa}{\lambda + \kappa^2}\left(\beta b_t(\rho, u) + u\right)\right) \leq Z.
\]
In that case, the solution is:
\[
x_t^{zlb}(\rho, u) = -\frac{Z - r^n_t(\rho) - b_t(\rho, u)}{\sigma} + a_t(\rho, u),
\]
\[
\pi_t^{zlb}(\rho, u) = \kappa\, x_t^{zlb}(\rho, u) + \beta b_t(\rho, u) + u,
\]
\[
i_t^{zlb}(\rho, u) = Z.
\]
To solve for the optimal path, we only need to know $a_t(\rho, u)$ and $b_t(\rho, u)$, which we can compute recursively. We have $a_{T-1}(\rho, u) = b_{T-1}(\rho, u) = 0$ for all $\rho, u$, since $x_T = \pi_T = 0$. To update the recursion, we write
\[
a_t(\rho, u) = \sum_{\rho', u'} P_\rho(\rho'|\rho)\, P_u(u'|u)\left[X_{t+1}(\rho', u')\, x_{t+1}^{zlb}(\rho', u') + \left(1 - X_{t+1}(\rho', u')\right) x_{t+1}^{nb}(\rho', u')\right],
\]
and
\[
b_t(\rho, u) = \sum_{\rho', u'} P_\rho(\rho'|\rho)\, P_u(u'|u)\left[X_{t+1}(\rho', u')\, \pi_{t+1}^{zlb}(\rho', u') + \left(1 - X_{t+1}(\rho', u')\right) \pi_{t+1}^{nb}(\rho', u')\right].
\]
The discounted loss can be computed along the same recursion; note that $\sum_{t=T}^{\infty} \beta^t\left(\pi_t^2 + \lambda x_t^2\right) = 0$.
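This recursion is straightforward to vectorize over the $(\rho, u)$ grid. Below is a Python sketch of the procedure (the authors' posted code is in Matlab); the function name, argument list, and default parameter values are our illustrative choices:

```python
import numpy as np

def solve_discretion(T, T0, rn0, rstar, rho_grid, P_rho, u_grid, P_u,
                     beta=0.99, kappa=0.05, lam=0.25, sigma=1.0, Z=-0.02):
    # Backward recursion of Appendix D.1.1 on the (rho, u) grid.
    nr, nu = len(rho_grid), len(u_grid)
    f = np.array([rn0 + t / T0 * (rstar - rn0) if t <= T0 else rstar
                  for t in range(T)])              # deterministic path f_t
    a = np.zeros((nr, nu))                         # a_{T-1} = E[x_T] = 0
    b = np.zeros((nr, nu))                         # b_{T-1} = E[pi_T] = 0
    pi, x, i = (np.zeros((T, nr, nu)) for _ in range(3))
    u = u_grid[None, :]
    for t in range(T - 1, -1, -1):
        rn = f[t] + rho_grid[:, None]              # natural rate r^n_t(rho)
        # unconstrained solution given expectations a, b
        x_nb = -kappa / (lam + kappa**2) * (beta * b + u)
        pi_nb = lam / (lam + kappa**2) * (beta * b + u)
        i_nb = rn + b + sigma * (a - x_nb)
        # solution at the floor
        x_z = a - (Z - rn - b) / sigma
        pi_z = kappa * x_z + beta * b + u
        bind = i_nb < Z                            # indicator X_t(rho, u)
        x[t] = np.where(bind, x_z, x_nb)
        pi[t] = np.where(bind, pi_z, pi_nb)
        i[t] = np.where(bind, Z, i_nb)
        # expectations for the next step down: a_{t-1}, b_{t-1}
        a = P_rho @ x[t] @ P_u.T
        b = P_rho @ pi[t] @ P_u.T
    return pi, x, i
```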
D.1.2 Equilibrium under a Taylor-type rule

We can also calculate the equilibrium in the forward-looking model under a rule of the form
\[
i_t = \max\left(Z,\ g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t\right),
\]
and the difficulty is that which formula applies for the interest rate depends on the values of inflation and the output gap, which themselves depend on the interest rate. However, it is easy to solve the model by backward induction, in a way roughly similar to the optimal policy calculation above. For $t \geq T$, the rule has a constant intercept $g_\infty > 0$, and the equilibrium is the steady state
\[
\pi = \frac{r^* - g_\infty}{\phi_\pi - 1 + \phi_x (1 - \beta)/\kappa}, \qquad x = \frac{1 - \beta}{\kappa}\,\pi.
\]
In particular, if $g_\infty = r^*$, the terminal state is $x = \pi = 0$. (This case is the outcome for cases (a) and (b), but not necessarily for case (c), depending on whether the terminal intercept equals $r^*$.)
Use a superscript $W$ to denote the outcomes under this rule. Define again
\[
a_t^W(\rho, u) = E_t\left(x_{t+1}^W \mid \rho_t = \rho,\ u_t = u\right), \qquad b_t^W(\rho, u) = E_t\left(\pi_{t+1}^W \mid \rho_t = \rho,\ u_t = u\right),
\]
and note that
\[
\pi_t^W(\rho, u) = \beta b_t^W(\rho, u) + u + \kappa x_t^W(\rho, u),
\]
\[
x_t^W(\rho, u) = a_t^W(\rho, u) - \frac{1}{\sigma}\left[\max\left(Z,\ g_t(\rho_t) + \phi_\pi \pi_t^W(\rho, u) + \phi_x x_t^W(\rho, u)\right) - r^n_t(\rho_t) - b_t^W(\rho, u)\right].
\]
Suppose first that the argument of the rule exceeds $Z$; solving the two equations then gives
\[
x_t^W(\rho, u) = \frac{\sigma a_t^W(\rho, u) - \phi_\pi\left(\beta b_t^W(\rho, u) + u\right) - g_t(\rho_t) + r^n_t(\rho_t) + b_t^W(\rho, u)}{\sigma + \phi_\pi \kappa + \phi_x},
\]
and $\pi_t^W(\rho, u)$ can be obtained from the equation above. We can then check whether indeed $g_t(\rho_t) + \phi_\pi \pi_t^W(\rho, u) + \phi_x x_t^W(\rho, u) > Z$ is satisfied. If it is not, we look for a solution at the ZLB, i.e.,
\[
\pi_t^W(\rho, u) = \beta b_t^W(\rho, u) + u + \kappa x_t^W(\rho, u),
\]
\[
x_t^W(\rho, u) = a_t^W(\rho, u) - \frac{1}{\sigma}\left(Z - r^n_t(\rho_t) - b_t^W(\rho, u)\right),
\]
and we check that with this solution, $g_t(\rho_t) + \phi_\pi \pi_t^W(\rho, u) + \phi_x x_t^W(\rho, u) < Z$.⁵⁹ Given the values of $\pi_t^W(\rho, u)$ and $x_t^W(\rho, u)$ for all $\rho, u$, we can update $a_{t-1}^W(\rho, u)$ and $b_{t-1}^W(\rho, u)$ and hence proceed backwards until time 0. We can furthermore calculate the loss function in the same way as for optimal policy.

⁵⁹ In principle, it is possible that either none or both solutions exist, but we never encountered this case in our calculations.
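A single backward-induction step of this calculation can be written as follows. This Python sketch is ours, with placeholder parameter defaults, and simply mirrors the two-branch logic above:

```python
def rule_step_forward(a, b, u, g, rn, phi_pi, phi_x,
                      beta=0.99, kappa=0.05, sigma=1.0, Z=-0.02):
    # One step of Appendix D.1.2, given next-period expectations
    # a = E[x'], b = E[pi'] under the rule.
    # Branch 1: the rule is above the floor.
    x = (sigma * a - phi_pi * (beta * b + u) - g + rn + b) \
        / (sigma + phi_pi * kappa + phi_x)
    pi = beta * b + u + kappa * x
    if g + phi_pi * pi + phi_x * x > Z:
        return pi, x, g + phi_pi * pi + phi_x * x
    # Branch 2: the rate is at the floor Z.
    x = a - (Z - rn - b) / sigma
    pi = beta * b + u + kappa * x
    return pi, x, Z
```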
D.2 Backward-looking model

The model is
\[
\pi_t = \alpha \pi_{t-1} + \kappa x_t + u_t, \qquad x_t = x_{t-1} - \frac{1}{\sigma}\left(i_t - r^n_t(\rho_t) - \pi_{t-1}\right).
\]
As in the forward-looking case, we assume that this model applies to deviations of inflation from the target, and $i_t$ is the difference between the nominal rate and the inflation target $\pi^*$. (Formally, this can be justified as in footnote 58.) As a result, we have the ZLB constraint $i_t \geq Z = -\pi^*$.
D.2.1 Optimal policy under discretion

The optimal policy under discretion can be set up using a Bellman equation:
\[
V_t(x_{-1}, \pi_{-1}, \rho, u) = \min_{i \geq Z} \; \frac{1}{2}\left(\pi^2 + \lambda x^2\right) + \beta E_{\rho', u' \mid \rho, u}\, V_{t+1}(x, \pi, \rho', u'),
\]
\[
\text{s.t.} \quad \pi = \alpha \pi_{-1} + \kappa x + u, \qquad x = x_{-1} - \frac{1}{\sigma}\left(i - r^n_t(\rho) - \pi_{-1}\right).
\]
We first solve for the value in the final steady state, $V_T(x_{-1}, \pi_{-1})$; we have a closed-form solution if the ZLB does not bind for all values of $x, \pi$ (see Appendix B), or we can solve it numerically using the Bellman equation
\[
V_T(x_{-1}, \pi_{-1}) = \min_{i \geq Z} \; \frac{1}{2}\left(\pi^2 + \lambda x^2\right) + \beta V_T(x, \pi),
\]
\[
\text{s.t.} \quad \pi = \alpha \pi_{-1} + \kappa x, \qquad x = x_{-1} - \frac{1}{\sigma}\left(i - r^* - \pi_{-1}\right).
\]
For $t < T$, we solve the Bellman equation above numerically. For simplicity, we assume that only a discrete set of interest rates is allowed, call it $G = \{i_1, ..., i_N\}$. We then solve the Bellman equation by interpolating the value functions on a grid for $x$ and for $\pi$. Specifically, at time $t$, and for each value of $x$ and $\pi$ in these grids, we calculate the payoff of using any given interest rate $i \in G$ today, and select the optimal one. This may require interpolating to find the expected future value; we use linear interpolation. This solution method produces the optimal policy $i_t(x_{-1}, \pi_{-1}, \rho, u)$, the output gap $x_t(x_{-1}, \pi_{-1}, \rho, u)$ and inflation $\pi_t(x_{-1}, \pi_{-1}, \rho, u)$, as well as the loss function $V_t(x_{-1}, \pi_{-1}, \rho, u)$, for all points in the grid. We then move on to period $t - 1$, and so on until time 0.
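For the terminal problem, a compact Python sketch of the value-function iteration with a discrete rate grid and linear interpolation follows; the grids, tolerances, and parameter values are our illustrative choices, not the paper's:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

beta, alpha, kappa, lam = 0.99, 0.85, 0.05, 0.25
sigma, rstar, Z = 1.0, 0.01, -0.02

xg = np.linspace(-0.10, 0.10, 41)   # grid for lagged output gap
pg = np.linspace(-0.05, 0.05, 41)   # grid for lagged inflation
ig = np.linspace(Z, 0.10, 25)       # discrete set of allowed rates G
V = np.zeros((len(xg), len(pg)))

X1, P1, I = np.meshgrid(xg, pg, ig, indexing="ij")
x = X1 - (I - rstar - P1) / sigma   # IS curve
pi = alpha * P1 + kappa * x         # Phillips curve (terminal period: u = 0)
loss = 0.5 * (pi**2 + lam * x**2)
# keep continuation states on the grid (a crude version of the cap
# discussed under "Deflationary traps" below)
pts = np.stack([np.clip(x, xg[0], xg[-1]),
                np.clip(pi, pg[0], pg[-1])], axis=-1)

for _ in range(2000):
    interp = RegularGridInterpolator((xg, pg), V)
    V_new = (loss + beta * interp(pts)).min(axis=2)  # best i in G
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
```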
D.2.2 Equilibrium under a Taylor-type rule

We can also calculate the equilibrium in this model under a rule of the form
\[
i_t = \max\left(Z,\ g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t\right).
\]
Specifically, given $x_{t-1}$ and $\pi_{t-1}$ and the values of $\rho_t, u_t$, we must solve the system:
\[
\pi_t = \alpha \pi_{t-1} + \kappa x_t + u_t, \qquad x_t = x_{t-1} - \frac{1}{\sigma}\left(\max\left(Z,\ g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t\right) - r^n_t(\rho_t) - \pi_{t-1}\right),
\]
and so we need to consider the two possible cases to find the solution. Either $g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t > Z$, in which case
\[
x_t = \frac{\sigma x_{t-1} - \phi_\pi\left(\alpha \pi_{t-1} + u_t\right) - g_t(\rho_t) + r^n_t(\rho_t) + \pi_{t-1}}{\sigma + \phi_\pi \kappa + \phi_x},
\]
and $\pi_t = \alpha \pi_{t-1} + \kappa x_t + u_t$, and we need to verify that indeed $g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t > Z$; or we have
\[
x_t = x_{t-1} - \frac{1}{\sigma}\left(Z - r^n_t(\rho_t) - \pi_{t-1}\right), \qquad \pi_t = \alpha \pi_{t-1} + \kappa x_t + u_t,
\]
and we verify that $g_t(\rho_t) + \phi_\pi \pi_t + \phi_x x_t \leq Z$.
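The two-case solution for a single period can be coded directly. The following Python sketch is ours, with placeholder parameter defaults; it is the backward-looking analogue of the step shown for the forward-looking model:

```python
def rule_step_backward(x_lag, pi_lag, u, g, rn, phi_pi, phi_x,
                       alpha=0.85, kappa=0.05, sigma=1.0, Z=-0.02):
    # One period of Appendix D.2.2: i = max(Z, g + phi_pi*pi + phi_x*x).
    # Branch 1: the rule is above the floor.
    x = (sigma * x_lag - phi_pi * (alpha * pi_lag + u) - g + rn + pi_lag) \
        / (sigma + phi_pi * kappa + phi_x)
    pi = alpha * pi_lag + kappa * x + u
    if g + phi_pi * pi + phi_x * x > Z:
        return pi, x, g + phi_pi * pi + phi_x * x
    # Branch 2: the rate is at the floor Z.
    x = x_lag - (Z - rn - pi_lag) / sigma
    pi = alpha * pi_lag + kappa * x + u
    return pi, x, Z
```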
Deflationary traps

An important issue in this model is the risk of a deflation trap (Reifschneider and Williams (2000)). For given parameters, there is a set of initial values $x_{-1}$ and $\pi_{-1}$ from which the economy diverges to $-\infty$ even under optimal policy, at least for some shock realizations. Mechanically, this arises because inflation falls when the output gap is negative and persistence is high enough, while at the ZLB the output gap falls when the natural rate or lagged inflation are negative enough. Hence, low inflation and a low output gap can be self-reinforcing. These deflation traps capture an economically meaningful mechanism, but obviously the divergence to $-\infty$ is extreme. In reality, it seems more likely that the divergence would stop at some point due to a regime change in the way policy, expectations, or price setting is determined. For instance, fiscal policy might step in at some point and ensure that the deflation does not perpetuate itself. In our solution method, we impose this: there is a worst possible outcome, $\underline{\pi}$ for inflation and $\underline{x}$ for the output gap, which bounds inflation and the output gap from below and hence prevents the divergence to $-\infty$. Our simulations start from initial conditions such that the deflation trap can be avoided by appropriate policy, so the assumptions regarding the deflation trap are not key to our results. However, policy in this model is also motivated by the desire to prevent the economy from falling into a deflation trap should a negative sequence of shocks arise, and for some parameters this can significantly increase the incentive to maintain a buffer, i.e., to keep inflation and the output gap above target persistently.