Notes Oxford 06
1 There is little to nothing original in these lecture notes; they draw heavily on published work by others, on lecture notes I have studied as a student and on my own research. I thank Fabian Eser for reading a previous draft and making comments, John Bluedorn, Chris Bowdler and Roland Meeks for discussions about organizing the material and Fabio Ghironi for sharing his experience in teaching macroeconomics to PhD students.
Contents

1 Preliminaries
  1.1 Why care? Aim of the course
  1.2 What are we trying to explain? Stylised facts
  1.3 Solving a dynamic optimization problem in 2 minutes (Refresher?)
    1.3.1 Euler Equations in the Deterministic Case
    1.3.2 Euler Equations in the Stochastic Case
  1.4 A primer on asset pricing

2 The Benchmark DSGE-RBC model
  2.1 Environment
  2.2 Planner (centralised) economy
  2.3 Competitive equilibrium (or decentralising the planner outcome)
  2.4 Welfare theorems
  2.5 Functional forms
  2.6 Steady-state and conditions on preferences
  2.7 Loglinearisation
    2.7.1 Intermezzo on loglinearisation
    2.7.2 Loglinearising the planner economy
    2.7.3 Loglinearising the competitive economy
  2.8 Calibration
  2.9 Solving the model
    2.9.1 Solving forward and backward: (local) stability, indeterminacy and equilibrium uniqueness
    2.9.2 Elastic labor
    2.9.3 Discussion of solution in general case
  2.10 Welfare analysis - a primer
  2.11 Evaluating the model's performance
    2.11.1 Measurement of technology
  2.12 Impulse responses and intuition
    2.12.1 The role of labor supply elasticity
    2.12.2 The role of shock persistence
  2.13 Second moments
  2.14 What have we learned?
    2.14.1 Critical parameters
    2.14.2 The Solow residual
    2.14.3 Enhancing the propagation/amplification mechanism
    2.14.4 The role of variable capital utilization
    2.14.5 Where do we go
    2.14.6 Government Spending Shocks
  2.15 The single most embarrassing prediction of the frictionless model: The Equity Premium Puzzle (Back to Asset Pricing)

A Matlab programmes
  A.1 Matlab code for the baseline, elastic-labor model
  A.2 Matlab code for variable utilization model
Chapter 1
Preliminaries
1.1 Why care? Aim of the course
This course is meant to achieve three objectives:

1. familiarize you with the way modern macroeconomics attempts to explain business cycle fluctuations, i.e. the (co-)movement of macroeconomic time series, by using economic theory;

2. along the way to 1., teach you state-of-the-art modelling techniques, or tools that are of independent theoretical interest;

3. develop your economic intuition, which may easily seem of secondary importance when one struggles with 2.; this is a trap we will try to avoid.

In a nutshell, we will use dynamic stochastic general equilibrium (DSGE) models in order to understand business cycles, i.e. the comovement of macroeconomic time series. Why do we need this framework in the first place?

Why dynamic? Real-world economic decisions are dynamic: just think of the consumption-savings decision, the accumulation of wealth, etc.

Why stochastic? The world is uncertain, and this fundamental uncertainty may be the source of macroeconomic volatility. There is a huge and endless debate as to what exactly the sources of uncertainty are, which we will touch upon.

Why general equilibrium? GE theory imposes discipline. Agents' decisions are interrelated: the decision to consume by households interacts with households' decisions to invest in any available assets and to devote time to work, but also with the decision of firms to supply consumption goods and to employ factors (such as labor and capital) to produce these goods. These meaningful interactions take place in markets, and (under perfect competition at least) there will exist prices that make these markets clear. This begs questions about market power, the non-clearing of some markets, price setting, and various other frictions. On these issues (i.e. whether such frictions are important or not) there is again an enormous and endless debate that I will try to touch upon.
CHAPTER 1. PRELIMINARIES
Most of the theory we will develop will concern the frictionless model in which technology shocks are regarded as the main source of fluctuations: the real business cycle (RBC) model originally due to Kydland and Prescott (1982)1. This is not to say that we should take this model at face value and refuse the idea of frictions (though many people do so), but that it constitutes a useful benchmark. I.e. we want to see how far we can go in explaining fluctuations by using the frictionless model and, importantly, what it is that we cannot explain, and which assumptions we could relax in order to explain which puzzling fact. We also want to have a vehicle for understanding basic theoretical concepts, developing our intuition and, importantly, talking about welfare. You should remember throughout this course that the techniques you learn here are being used in most branches of macroeconomics, pretty much independently of one's view regarding the presence (or lack thereof) of frictions (such as market power in the goods or labor markets, distortionary taxation, etc.) and the source of fluctuations. To give just one prominent example, modern monetary policy analysis also uses DSGE models that nest the baseline frictionless model as a special case. These models2, which you should see later in the year, augment the benchmark model by incorporating imperfect price adjustment and monopolistic competition. Two important implications of this are that, in contrast with the benchmark RBC model, 1. monetary policy influences the real allocation of resources; 2. shocks other than technology can play a crucial role in explaining fluctuations. A good starting point for understanding this literature is the excellent book Interest and Prices by Michael Woodford (2003). Read also Lucas (2005), Review of Economic Dynamics.
1.2 What are we trying to explain? Stylised facts
READ King and Rebelo Section 2 and/or Cooley and Prescott Section 6. The theory presented here attempts to explain business cycle fluctuations, i.e. movement of macroeconomic variables around a trend, in which variables move together. This trend can be thought of as the balanced growth path, i.e. the equilibrium of the growth models which you have seen previously with Dr. Meeks. Business cycle theory uses the same model to study short-term fluctuations. It does this by documenting some statistics for macroeconomic time series and then building an artificial economy (a business cycle model) in order to replicate them. Importantly, since stylised facts pertaining to both
1 Something that is often overlooked or forgotten is that Kydland and Prescott did not try to show that real, technology shocks can explain the bulk of fluctuations. Indeed, their original model also featured nominal rigidities (wage stickiness) that allowed nominal disturbances to have real effects. The finding that technology shocks accounted for most of the observed fluctuations emerged from this larger model and was not imposed a priori.
2 Dubbed by some New Keynesian, by others Neo-monetarist, Neo-Wicksellian, etc.
growth (long-term movements) and business cycles (short-term movements) are derived using the same data, business cycle theory consists of building one model that can explain both types of data features. An important (and not free of consequences) step in analysing the data consists of deciding just how to extract the business-cycle information from a dataset, i.e. how to eliminate the trend. This is a whole industry, going from simply taking first-differences of logs of the raw data, or using the Hodrick-Prescott (HP) filter, to more sophisticated methods. The HP filter was first used by ... Hodrick and Prescott in 1980 to study empirical regularities in business cycles in quarterly post-war US data.
[Figure source: KR (1999).] I briefly review what this method does. Consider a time series $\tilde{X}_t$ from which we want to extract the cyclical information. First take logs of this series (unless it is expressed as a rate), $X_t = \ln \tilde{X}_t$. The HP filter decomposes this into a cyclical component $X_t^C$ and a growth or secular (or trend) component $X_t^G$, where the latter is a weighted average of past, present and future observations. The cyclical component is hence:

$$X_t^C = X_t - X_t^G = X_t - \sum_{j=-J}^{J} a_j X_{t-j}$$

The growth component is calculated by solving the optimization problem:

$$\min_{\{X_t^G\}_{t=0}^{T}} \; \sum_{t=1}^{T} \left(X_t^C\right)^2 + \lambda \sum_{t=1}^{T} \left[\left(X_{t+1}^G - X_t^G\right) - \left(X_t^G - X_{t-1}^G\right)\right]^2,$$
where $\lambda$ is the smoothing parameter, whose conventionally chosen value for quarterly data is 1600. Note that as $\lambda \to \infty$ we get a linear trend as the growth component, while for $\lambda = 0$ the growth component is simply the series itself. For our purposes it is enough to work with the stylised facts borrowed from King and Rebelo (1999) (which are in turn based on an extensive study by Stock and Watson in the same volume of the Handbook of Macroeconomics).

Table 1: Business Cycle Statistics for the U.S. Economy

Variable | sigma_x | sigma_x/sigma_y | E[x_t x_{t-1}] | corr(x, y)
y        | 1.81    | 1.00            | 0.84           | 1.00
c        | 1.35    | 0.74            | 0.80           | 0.88
i        | 5.30    | 2.93            | 0.87           | 0.80
l        | 1.79    | 0.99            | 0.88           | 0.88
Y/L      | 1.02    | 0.56            | 0.74           | 0.55
w        | 0.68    | 0.38            | 0.66           | 0.12
r        | 0.30    | 0.16            | 0.60           | -0.35
A        | 0.98    | 0.54            | 0.74           | 0.78

Source: King and Rebelo, 1999

Most macroeconomists would know the main features of fluctuations by heart. Here are the main cyclical properties (i.e. co-movement of selected HP-filtered series with total output $Y_t^C$):
1. consumption $C_t^C$ is less volatile than output;
2. investment $I_t^C$ is much more volatile than output (about three times);
Two additional observations, useful when we will consider variations of the baseline RBC model, are: (related to 4 above) although the capital stock is much less volatile than output, capital utilization is more volatile than output; (related to 3 above) hours per worker are much less volatile than output, suggesting that variations in total hours are accounted for by the extensive margin (employment). In addition, all aggregates display substantial persistence as judged by the first-order autocorrelation.
$$\operatorname{var}\left(K_t^C\right),\ \operatorname{var}\left(W_t^C\right) < \operatorname{var}\left(C_t^C\right) < \operatorname{var}\left(Y_t^C\right) \approx \operatorname{var}\left(L_t^C\right) \ll \operatorname{var}\left(I_t^C\right)$$
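The HP decomposition described above is easy to compute directly. The sketch below is mine, not from the notes: the trend problem's first-order conditions in matrix form give $X^G = (I + \lambda K'K)^{-1} X$, where $K$ is the $(T-2) \times T$ second-difference matrix, and the function name and test series are illustrative assumptions.

```python
import numpy as np

def hp_filter(x, lam=1600.0):
    """Return (cycle, trend) of series x via the Hodrick-Prescott filter.

    The trend g solves min sum((x-g)^2) + lam * sum((g[t+1]-2g[t]+g[t-1])^2),
    whose first-order conditions give g = (I + lam*K'K)^{-1} x, with K the
    (T-2) x T second-difference matrix.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * K.T @ K, x)
    return x - trend, trend

# A purely linear series has no cycle: the trend can match it exactly at zero
# penalty (its second differences are all zero), so the cycle is numerically zero.
cycle, trend = hp_filter(np.linspace(0.0, 5.0, 40))
```

In practice one applies this to the log of a quarterly macro series with lam = 1600, as in the table above.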
1.3 Solving a dynamic optimization problem in 2 minutes (Refresher?)
Disclaimer: this part just gives you the main intuition behind Euler Equations; a mathematician may faint. The main idea behind the calculus of variations is this: suppose you have to solve an optimisation problem, i.e. have to find some optimal path for your control variable in order to maximise (minimise) an objective function. The idea is that at any point (period) along that optimal path, a variation in that period's control (action) would reduce the overall payoff. In differential terms, it means that differentiating your objective function along the optimal path with respect to your state variable, you should get ZERO. Again, following the path is always better than deviating. If you know the other two main approaches to dynamic optimisation (Pontryagin and Bellman) you may think of a way to relate the three.
1.3.1 Euler Equations in the Deterministic Case
Note that the functional form of the utility function is time-variant, encompassing discounting, i.e. $U_t(\cdot) = \beta^t U(\cdot)$, and the constraint correspondence $\Gamma_t$ is also time-variant. For simplicity, suppose you have to choose directly tomorrow's state $X_{t+1}$; you can always substitute out for your control variable (see the example below). We will have some additional constraints, but I introduce these later, so you see when they are needed. Now suppose you have an optimal solution (path):

$$\{X_t^*\}_{t=0}^{\infty} = \left(X_0, X_1^*, \ldots, X_{t-1}^*, X_t^*, X_{t+1}^*, \ldots\right) \tag{1.2}$$

maximising the intertemporal objective function $\sum_{t=0}^{\infty} U_t\left(X_t, X_{t+1}\right)$.

To see the necessary first-order conditions, look at a feasible one-period deviation from the conjectured optimal path: $\left(X_0, X_1^*, \ldots, X_{t-1}^*, X, X_{t+1}^*, \ldots\right)$. Feasible means it satisfies the constraint correspondence $\forall t$; more specifically, $X \in \Gamma_{t-1}\left(X_{t-1}^*\right)$ and $X_{t+1}^* \in \Gamma_t(X)$ (the rest is trivial, as an optimal path is by definition feasible). Now, by optimality of 1.2, the following holds for any feasible $X$ (if you do not see this directly, note that all other terms in the intertemporal objective function $\sum_t U_t\left(X_t, X_{t+1}\right)$ are the same, as we consider a one-shot variation):

$$U_{t-1}\left(X_{t-1}^*, X_t^*\right) + U_t\left(X_t^*, X_{t+1}^*\right) \geq U_{t-1}\left(X_{t-1}^*, X\right) + U_t\left(X, X_{t+1}^*\right) \tag{1.3}$$

Now make two additional assumptions to be able to write this in the differentiable case: (i) $U_t(\cdot)$ is differentiable; (ii) $\{X_t^*\}_{t=0}^{\infty}$ is an interior solution $\forall t$, i.e. $X_t^* \in \operatorname{Int}\left(\Gamma_{t-1}\left(X_{t-1}^*\right)\right)$. The inequality 1.3 implies that the function $U_{t-1}\left(X_{t-1}^*, X\right) + U_t\left(X, X_{t+1}^*\right)$ attains a maximum at $X = X_t^*$. Because we assumed differentiability and that the maximum is interior, the derivative of this function evaluated at $X = X_t^*$ has to be zero. Hence we get the necessary condition (where $D_i U(\cdot)$ denotes $U$'s derivative or gradient with respect to its $i$-th argument):

$$D_2 U_{t-1}\left(X_{t-1}^*, X_t^*\right) + D_1 U_t\left(X_t^*, X_{t+1}^*\right) = 0, \quad \forall t \tag{1.4}$$

This is our friend the Euler Equation. Our best friend will be 1.4 in the discounting case, which reads:

$$D_2 U\left(X_{t-1}^*, X_t^*\right) + \beta D_1 U\left(X_t^*, X_{t+1}^*\right) = 0, \quad \forall t \tag{1.5}$$

It is straightforward to show that in the particular concave case ($U$ concave on $\Gamma$ convex), and for $\{X_t^*\}_{t=0}^{\infty}$ interior, the Euler equation is both necessary and sufficient for an optimum. I.e., $\{X_t^*\}_{t=0}^{\infty}$ is optimal if and only if 1.4 is satisfied. Note that 1.5 finally gives you a second-order difference equation $X_{t+1} = F\left(X_{t-1}, X_t\right)$ from which you get an explicit form for the optimal path; note this holds for all times $t$. Typically, in the models we consider during this course, we do not have to solve for these explicitly, as the models do not have a closed-form solution overall anyway, but you should maybe read on. The bad news is that such a (family of) difference equation(s) has a multiplicity of solutions. But the good news is something you might have forgotten: we have boundary conditions to help us choose amongst these. Do not get too excited, though: at first glance we just have the initial condition, an initial value for $X_t$, and you might know that for a second-order equation we need two conditions to pin down the unique path. Hence up to now we have pinned down a one-parameter family of paths. The other condition that helps us here is the transversality condition at infinity. See the example below for an illustration.

Example 1 (Cake-eating). Suppose a consumer has a fixed endowment $X_0$ and the only thing he can do with it is either eat it or save it for future consumption. His consumption in period $t$ is what he has today minus what he saves to eat from tomorrow onwards. Hence the constraint $X_{t+1} \in \Gamma_t(X_t)$ is now $X_t \geq X_{t+1} \geq 0$, and

$$C_t = X_t - X_{t+1}, \qquad U(C_t) = U\left(X_t - X_{t+1}\right) \tag{1.6}$$
Suppose there is discounting at rate $\beta$. This is then a particular case of the problem considered before: the Euler Equation, by 1.5, is

$$\frac{d}{dX_t} U\left(X_{t-1} - X_t\right) + \beta \frac{d}{dX_t} U\left(X_t - X_{t+1}\right) = 0 \quad \forall t$$

$$-U'\left(X_{t-1} - X_t\right) + \beta U'\left(X_t - X_{t+1}\right) = 0 \quad \forall t, \qquad \text{i.e.} \qquad U'(C_{t-1}) = \beta U'(C_t)$$

When utility is logarithmic, as is assumed many times in our applications, we get $C_t = \beta C_{t-1}$, $\forall t$. Do not be misled: this is indeed not a second-order but a first-order difference equation, but it is expressed in the control, not the state.
Replacing back consumption as a function of the state, you get a second-order equation. We have no initial value for consumption; we just know the initial value of the cake! Hence we still need a transversality condition, which in this case is: $\sum_{t=0}^{\infty} C_t = X_0$. Then you can trivially solve the difference equation in consumption and find the optimal path explicitly. In terms of the state, the solution is:

$$X_t = X_0 \beta^t \quad \forall t$$

This makes sense: you eat a constant fraction of the cake each period, your total consumption over the infinite horizon never exceeds the endowment (there is always some cake around), and the more impatient you are, the faster you eat (the faster the cake shrinks). Note two last things: in the way the problem has been set up I used sup and not max; sometimes a maximum is not achieved, so make sure to check this in your own research. Secondly, when you have a minimisation problem you can obviously treat it as maximising the negative of the original objective function.
1.3.2 Euler Equations in the Stochastic Case
Let me now be very sloppy and merely say what happens in the stochastic case informally, as doing it formally would imply I should introduce other concepts like some measure theory, Lebesgue integrals, Markovian operators, etc.; if you are really keen on this, see the book by Stokey and Lucas with Prescott3. Generally, our equation for the discounting case, 1.5, would modify to:

$$D_2 U\left(X_{t-1}, X_t, \varepsilon_t\right) + \beta E_{t-1}\left[D_1 U\left(X_t, X_{t+1}, \varepsilon_t\right)\right] = 0 \tag{1.7}$$

where $\varepsilon$ is a general stochastic shock, capturing the state of the world. Then $E_{t-1}$ is the expectation operator conditional on information at $t-1$, and how you define this is part of the long story. Now your optimal path will tell you what to do at any given time, AND in any given state of the world: it will be a state-contingent plan. For example, if you have to eat a stochastic cake (i.e. the size of your cake shrinks or expands each period stochastically by a multiplicative shock $z_t$ which has to satisfy some nice properties), you have the Euler Equation:

$$U'(C_{t-1}) = \beta E_{t-1}\left[z_t U'(C_t)\right]$$
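For the stochastic cake, a known guess-and-verify result (not derived in these notes, so take it as an assumption here) is that with log utility the optimal rule is again to eat a constant fraction, $C_t = (1-\beta)X_t$, whatever the shocks: the multiplicative $z$ cancels out of the Euler equation. A sketch checking this draw by draw, with invented shock parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, X0, T = 0.95, 1.0, 50

# Multiplicative shocks: here z[t] scales the savings carried from t to t+1.
z = rng.lognormal(mean=0.0, sigma=0.2, size=T)

X = np.empty(T + 1)
X[0] = X0
C = np.empty(T)
for t in range(T):
    C[t] = (1 - beta) * X[t]          # consume a constant fraction of the cake
    X[t + 1] = z[t] * (X[t] - C[t])   # tomorrow's cake: savings scaled by shock

# With log utility, beta * z[t] / C[t+1] = 1/C[t] for EVERY realization:
# the shock cancels, so the Euler equation holds exactly, not just in expectation.
lhs = 1.0 / C[:-1]
rhs = beta * z[:-1] / C[1:]
```

This is a special feature of log utility; for general CRRA preferences the fraction consumed would depend on the shock distribution.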
1.4 A primer on asset pricing

The purpose of this section is merely to introduce some very basic ideas about pricing assets. The aspiring macroeconomists amongst you should really read Chapter 13 (and Chapter 10) of Ljungqvist and Sargent and John Cochrane's book Asset Pricing.
3 If you read Italian, I warmly recommend: Montrucchio, L. and Cugno, F. (2000), Scelte intertemporali: teoria e modelli , Carocci Editore, Roma.
Suppose that a consumer maximizes expected lifetime utility:

$$u\left(\{C_t\}_{t=0}^{\infty}\right) = \sum_{t=0}^{\infty} \beta^t E_t U(C_t)$$
and can invest in some arbitrary asset that costs her $P_t$ today and leads to the payoff $Q_{t+1}$ tomorrow (for example, it can be re-sold tomorrow on the market at price $P_{t+1}$ and also pays some dividend $D_{t+1}$, such that $Q_{t+1} = P_{t+1} + D_{t+1}$). Let us assume that the number of assets that the consumer demands (we denote demand by superscript $d$) is $N_t^d$. The budget constraint is:

$$P_t N_{t+1}^d + C_t = Q_t N_t^d + W_t L, \tag{1.8}$$
where we assume that the consumer receives some labor income $W_t L$, where $W_t$ is the wage and $L$ the fixed (for now) amount of time spent working. Note that $P_t N_{t+1}^d$ is the value of assets purchased during period $t$ and carried over to period $t+1$, while $Q_t N_t^d$ is the total payoff of assets brought into period $t$. Therefore, $N_t^d$ is a state variable. You can maximize utility with respect to the number of demanded assets subject to this dynamic constraint using the techniques you learned. Rewrite the budget constraint as $C_t = Q_t N_t^d + W_t L - P_t N_{t+1}^d$ and substitute into the objective function:

$$\sup \; \sum_{t=0}^{\infty} \beta^t E_t U\left(Q_t N_t^d + W_t L - P_t N_{t+1}^d\right)$$
Assuming that there is an interior, strictly non-zero demand for these assets (i.e. there are no borrowing or liquidity constraints, no transaction costs, and no short-sale restrictions), you obtain the Euler equation at $t$ for assets by differentiating with respect to $N_{t+1}^d$ (which can be regarded as today's control):

$$-\beta^t P_t U_C(C_t) + \beta^{t+1} E_t\left[Q_{t+1} U_C(C_{t+1})\right] = 0$$

or:

$$U_C(C_t) = \beta E_t\left[\frac{Q_{t+1}}{P_t} U_C(C_{t+1})\right] \tag{1.9}$$
The cost (in utility units) of decreasing consumption today has to be equal to the expected benefit tomorrow; this benefit is given by the gross return of investing in the asset, transformed into tomorrow's utils and discounted by $\beta$. Rearrange the Euler equation to obtain the fundamental asset pricing formula:

$$P_t = E_t\left[\Lambda_{t,t+1} Q_{t+1}\right], \tag{1.10}$$

where

$$\Lambda_{t,t+1} = \beta \frac{U_C(C_{t+1})}{U_C(C_t)}. \tag{1.11}$$
$\Lambda_{t,t+1}$ is the stochastic discount factor, or pricing kernel, or marginal rate of substitution (or, if that's not enough: change of measure, or state price density). It governs the rate at which the consumer is willing to substitute between consumption at $t$ and at $t+1$. By looking at 1.9 and 1.10 (which are really two facets of the same medal), you can understand a distinction that is at the core of modern macroeconomics4:

1. Researchers looking at consumption behaviour typically treat asset returns $Q_{t+1}/P_t$ as given (or exogenous) and use 1.9 to derive the implications for consumption, compare this to data, etc.

2. Finance researchers typically take discount factors $\Lambda_{t,t+1}$ (or, equivalently, marginal utility) as given and use 1.10 to look at the behaviour of asset prices.

3. People who use general equilibrium models take none of these objects as given, but regard all of them as being jointly determined in equilibrium. For somebody who uses this approach, it makes no difference whether one uses 1.9 or 1.10. Otherwise put, since all variables are treated as endogenous, either way of writing the equation is equally legitimate. You can of course run the mental experiment "what happens to asset prices if the path of consumption is such and such", or the other way around ("what is the optimal consumption path if asset returns are such and such"). But keep in mind that this is just a way to sometimes help the intuition.

Note that the above derivation has not imposed any market structure, any nature of the assets (they can be anything) or any particular structure for uncertainty. Indeed, in many cases (i.e. for many types of assets and/or market structures), once you impose equilibrium (and hence market clearing) you will find $N_{t+1}^d = N_t^d$, and you can normalize this to 1; but we should not take this route here. Let us now look at some examples for specific assets.

Example 2 For stock (shares) the payoff is given by the market price plus the dividend: $Q_{t+1} = P_{t+1} + D_{t+1}$.
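A tiny two-state illustration of the pricing formula 1.10 (my own made-up numbers; CRRA utility is an assumption of mine, not something the derivation above requires): given consumption in each state tomorrow, build the kernel and price an arbitrary payoff. With curvature zero the kernel collapses to $\beta$, so the price is just $\beta E[Q]$:

```python
import numpy as np

beta = 0.96
prob = np.array([0.5, 0.5])        # probabilities of tomorrow's two states
C_today = 1.0
C_tomorrow = np.array([1.1, 0.9])  # consumption in the good and bad state
Q = np.array([2.0, 0.5])           # asset payoff: high when consumption is high

def price(gamma):
    """P = E[Lambda * Q] with CRRA kernel Lambda = beta * (C'/C)^(-gamma)."""
    Lam = beta * (C_tomorrow / C_today) ** (-gamma)
    return float(np.sum(prob * Lam * Q))

p_risk_neutral = price(0.0)   # gamma = 0: P = beta * E[Q]
p_risk_averse = price(2.0)    # gamma = 2: bad-state payoffs weighted more
```

Since this asset pays well exactly when consumption is high (when marginal utility is low), the risk-averse consumer values it less than its discounted expected payoff: a risk premium.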
The pricing formula becomes:

$$P_t = E_t\left[\Lambda_{t,t+1}\left(P_{t+1} + D_{t+1}\right)\right] \tag{1.12}$$

Define the stochastic discount factor between time $t$ and $t+j$:

$$\Lambda_{t,t+j} = \beta^j \frac{U_C(C_{t+j})}{U_C(C_t)} = \Lambda_{t,t+1} \Lambda_{t+1,t+2} \cdots \Lambda_{t+j-1,t+j}$$

Using this and the law of iterated expectations, you can iterate 1.12 forward to obtain the price of a share as the discounted present value of future dividends:

$$P_t = \lim_{T \to \infty} E_t\left[\Lambda_{t,t+T} P_{t+T}\right] + \lim_{T \to \infty} E_t \sum_{j=1}^{T} \Lambda_{t,t+j} D_{t+j} = E_t \sum_{j=1}^{\infty} \Lambda_{t,t+j} D_{t+j},$$
4 This distinction is much less operational now than it was, say, 25 years ago, as most of the literature now falls in the third category; but it is still useful to fix ideas.
where the second equality has used transversality. This is essentially a stripped-down version of Lucas's tree model in "Asset prices in an exchange economy", 1978, Econometrica.

Exercise 3 How many summation operators are in the equation $P_t = E_t \sum_{j=1}^{\infty} \Lambda_{t,t+j} D_{t+j}$? What are the dimensions along which we are doing the summation?
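For intuition on the present-value formula, consider a degenerate deterministic case (an example of mine, not from the notes): a constant one-period kernel $\Lambda_{t,t+1} = m$ and a constant dividend $D$ give $P = \sum_{j \geq 1} m^j D = mD/(1-m)$, which a truncated sum approximates as the horizon grows and the "bubble" term $m^T P_T$ dies out:

```python
m, D = 0.97, 1.0   # constant one-period discount factor and dividend (illustrative)

def price_truncated(T):
    """Discounted sum of dividends up to horizon T: sum_{j=1..T} m^j * D."""
    return sum(m ** j * D for j in range(1, T + 1))

p_closed_form = m * D / (1 - m)   # geometric-series limit of the sum
p_short = price_truncated(1)      # one-period truncation undershoots
p_long = price_truncated(2000)    # long truncation is essentially exact
```

The geometric tail $m^{T+1}/(1-m)$ is exactly the pricing error of truncation, which is the transversality term in disguise.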
Example 4 We can define the net return of any asset, $R_{t+1}^A$, from the payoff of an asset with price 1, i.e. if I invest one unit of consumption or currency today, how many units of consumption or currency do I get tomorrow? So

$$1 + R_{t+1}^A \equiv \frac{Q_{t+1}}{P_t}$$
Example 5 An important special case occurs for the risk-free rate $1 + R_t$, i.e. the return of an asset (a riskless bond) that gives you something tomorrow with certainty:

$$1 = E_t\left[\Lambda_{t,t+1}\left(1 + R_t\right)\right] = \left(1 + R_t\right) E_t\left[\Lambda_{t,t+1}\right]$$
$$\left(1 + R_t\right)^{-1} = E_t\left[\Lambda_{t,t+1}\right]$$

Alternatively, you can think of an asset whose payoff tomorrow is 1 with certainty, and whose price by the above formula is $\left(1 + R_t\right)^{-1}$. Such an asset is called a discount bond. More generally, you can define the riskless rate by this formula even when a riskless asset is not being traded or does not exist.

Example 6 Physical capital. Consider a physical asset that is constituted by the same good as the good that is consumed (also, the decision to invest is reversible: this asset can be transformed back into the consumption good freely); therefore, its price in units of consumption is 1. However, the household can decide that instead of consuming, it can put this aside and use the accumulated stock $K$ as an input in production, an activity that yields the household $R_t^K$ units of the consumption good (for instance, think of this as a rental rate). Once it does so, it needs to accept that some of the stock will deplete over time, i.e. depreciate, say at the exogenous rate $\delta$. Therefore, the dividend net of depreciation is $R_t^K - \delta$, and hence the payoff tomorrow is $1 + R_{t+1}^K - \delta$. The Euler equation is:

$$1 = E_t\left[\Lambda_{t,t+1}\left(1 + R_{t+1}^K - \delta\right)\right]$$

(Because the price is 1, $R_t^K - \delta$ is also the net return on this asset.) This example naturally creates the link with the model we study next. (You will see models in which the price of capital, related to Tobin's $q$, is allowed to vary when you study models with investment adjustment costs.)
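The two special cases can be checked numerically (a sketch of mine; all numbers and the CRRA kernel are illustrative assumptions). The riskless bond price is $E_t[\Lambda_{t,t+1}]$, so the risk-free rate is its reciprocal minus one; and in a nonstochastic steady state with constant consumption the kernel equals $\beta$, so the capital Euler equation $1 = \beta(1 + R^K - \delta)$ pins down $R^K = 1/\beta - 1 + \delta$:

```python
import numpy as np

beta, gamma = 0.96, 2.0
prob = np.array([0.3, 0.7])
growth = np.array([0.95, 1.05])        # consumption growth in each state
Lam = beta * growth ** (-gamma)        # CRRA pricing kernel, state by state

# Riskless bond: pays 1 in every state, so its price is E[Lambda],
# and the risk-free rate satisfies (1 + R)^(-1) = E[Lambda].
price_bond = float(np.sum(prob * Lam))
R_free = 1.0 / price_bond - 1.0

# Capital in a nonstochastic steady state (constant consumption => Lambda = beta):
# 1 = beta * (1 + R_K - delta)  =>  R_K = 1/beta - 1 + delta.
delta = 0.025
R_K = 1.0 / beta - 1.0 + delta
```

Verifying that the computed rates satisfy the two Euler equations is a useful habit before moving to the full model.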
Chapter 2

The Benchmark DSGE-RBC model
2.1 Environment
We assume that the economy is populated by an infinite number of atomistic households who are identical in all respects. Preferences of these households, defined over consumption $C_t$ and hours worked $L_t$, are additively separable over time:

$$u\left(\{C_t, L_t\}_{t=0}^{\infty}\right) = \sum_{t=0}^{\infty} \beta^t U(C_t, L_t),$$

where $\beta \in (0,1)$ is the discount factor, and $U(\cdot,\cdot)$, the momentary felicity function, is continuously differentiable in both arguments, increasing and concave in $C$ and decreasing and convex in $L$ (i.e. increasing and concave in leisure).

1 For a business-cycle version of an endogenous growth model a la Romer-Grossman-Helpman see Bilbiie, Ghironi and Melitz (2006).
The technology for producing the single good of this economy, $Y_t$, is described by the production function:

$$Y_t = A_t F(K_t, L_t),$$

where $K_t$ is the stock of capital and $L_t$ is labor. $F: \mathbb{R}_+^2 \to \mathbb{R}_+$ is increasing in both arguments, concave in each argument, continuously differentiable and homogeneous of degree one. Moreover, $F(0,0) = F(0, L_t) = F(K_t, 0) = 0$ and the Inada conditions hold:

$$\lim_{K \to 0} F_K(\cdot) = \infty; \qquad \lim_{K \to \infty} F_K(\cdot) = 0.$$
2.2 Planner (centralised) economy
Suppose that the economy is governed by a benevolent social planner who chooses sequences $\{C_t\}_0^{\infty}$, $\{K_{t+1}\}_0^{\infty}$, $\{L_t\}_0^{\infty}$ to maximize the intertemporal objective,

$$\max \; E_t \sum_{i=0}^{\infty} \beta^i U\left(C_{t+i}, L_{t+i}\right) \tag{2.1}$$
subject to initial conditions for the stock of capital $K_0$ and technology $A_0$ and to the following constraints. Total output of this economy, $Y_t$, is produced using physical capital and labor:

$$Y_t = A_t F(K_t, L_t), \tag{2.2}$$
where $A_t$ is an exogenous productivity shifter, a technology shock whose dynamics will be specified further below. In this closed economy without government, output is used for two purposes: consumption and augmenting the capital stock, i.e. investment:

$$C_t + I_t = Y_t \tag{2.3}$$

The stock of capital $K_t$ accumulates obeying the following dynamic equation (this is a rough approximation to the method used in practice, called the perpetual inventory method, to construct the capital stock), where we assume that the depreciation rate $\delta$ is constant and $I_t$ is the amount invested at $t$ in the capital stock:

$$K_{t+1} = (1 - \delta) K_t + I_t \tag{2.4}$$
Finally, the amount of time spent working is bounded above by the time endowment, normalized to unity: $L_t \leq 1$. We will assume an interior solution, such that some time is always devoted to leisure: $L_t < 1$. To solve this problem, consolidate all equality constraints into a single one and express consumption as a function of future capital, present capital and hours worked:

$$C_t + K_{t+1} = (1 - \delta) K_t + A_t F(K_t, L_t) \tag{2.5}$$
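The perpetual-inventory construction behind the accumulation equation 2.4 can be sketched as follows (an illustration with invented numbers, not the actual measurement procedure used on the data): given an initial stock and an investment series, iterate $K_{t+1} = (1-\delta)K_t + I_t$:

```python
import numpy as np

def perpetual_inventory(K0, investment, delta):
    """Build a capital-stock series from K_{t+1} = (1 - delta)*K_t + I_t."""
    K = [K0]
    for inv in investment:
        K.append((1.0 - delta) * K[-1] + inv)
    return np.array(K)

# With investment held fixed at I, the stock converges to K* = I/delta,
# the level at which investment exactly offsets depreciation.
delta, I = 0.025, 1.0
K = perpetual_inventory(K0=10.0, investment=[I] * 2000, delta=delta)
```

The convergence property is also why, in the steady state of the model below, investment equals $\delta K$.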
You can now solve our optimization problem using one of the techniques you learned. For example, using the Euler equation apparatus, we differentiate the following objective function with respect to next period's state (capital) and hours worked:

$$\max_{\{K_{t+1+i},\, L_{t+i}\}} \; E_t \sum_{i=0}^{\infty} \beta^i U\left((1-\delta)K_{t+i} + A_{t+i} F\left(K_{t+i}, L_{t+i}\right) - K_{t+1+i},\; L_{t+i}\right)$$
The first-order equilibrium conditions with respect to $K_{t+1}$ and $L_t$ respectively (together with the budget constraint above) are:

$$U_C(C_t, L_t) = \beta E_t\left\{U_C(C_{t+1}, L_{t+1})\left[A_{t+1} F_K\left(K_{t+1}, L_{t+1}\right) + 1 - \delta\right]\right\} \tag{2.6}$$

$$-U_L(C_t, L_t) = U_C(C_t, L_t)\, A_t F_L(K_t, L_t) \tag{2.7}$$

The first equation states that the marginal cost of saving a unit of the consumption good today be equal to the expected marginal benefit of saving it tomorrow times the gross benefit of augmenting the capital stock by saving, where the latter is given by the marginal product of capital net of depreciation. The second equation states that the marginal disutility of working be equal to the marginal benefit of working, in utility terms. Alternatively, it equates the marginal rate of substitution between consumption and hours worked, $-U_L(C_t, L_t)/U_C(C_t, L_t)$, to the marginal rate at which labor is transformed into the consumption good: $A_t F_L(K_t, L_t)$.

In terms of practical implementation, you will usually want to ensure (especially when you are dealing with much larger models) that you have as many equations as variables in order to solve your model. In this simple example, we have the two first-order conditions plus the resource constraint for three variables $C_t, K_{t+1}, L_t$. (If you wanted to solve for investment and output you would simply use 2.3 and 2.4.) However, note a more subtle point related to finding the whole path of solutions for the variables of interest. Let's abstract from labor, for example by assuming that the household does not care about it at all, so we drop 2.7 and $L_t$ from all equations. We need to find the entire paths for $\{C_t\}_0^{\infty}, \{K_{t+1}\}_0^{\infty}$ from 2.5 and 2.6, and we have an initial condition for the capital stock, $K_0$. You may be tempted to think that we are done, since 2.6 is a first-order difference equation and we have one initial condition.

This is misleading: while 2.6 is a first-order difference equation in $C$, it is a second-order difference equation in $K$, the variable for which we have the initial condition. In fact, we can substitute $C_t$ from 2.5 into 2.6 to obtain a second-order difference equation in $K_t, K_{t+1}, K_{t+2}$. And we still have only one initial condition ... Why am I bothering you with this? Because you may now understand why we need an additional boundary condition on capital in order to solve for its entire optimal path $\{K_{t+1}\}_0^{\infty}$. This condition is the Transversality condition:

$$\lim_{i \to \infty} \beta^i E_t\left[U_C\left(C_{t+i}, \cdot\right) K_{t+i}\right] = 0.$$

Finally, note that since the model is stochastic, the decision rules are not found at time 0 and then left unchanged; a new realization of the shock
each period changes agents' information sets. This makes decision rules state-contingent: how much to consume, work, save, etc., depends on the state of the economy in a given period. The state of this model is bi-dimensional: $(A_t, K_t)$, where $A$ is an exogenous state and $K$ is an endogenous state. Therefore, formally, the decision rules that solve the system of equilibrium conditions are best written as $C_t(A_t, K_t)$; $K_{t+1}(A_t, K_t)$; $L_t(A_t, K_t)$.
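Looking ahead to the calibration section, the nonstochastic steady state implied by 2.6 can be computed once functional forms are chosen. The sketch below assumes Cobb-Douglas production $Y = AK^{\alpha}L^{1-\alpha}$ and invented parameter values of mine, for illustration only. Dropping time subscripts, 2.6 becomes $1 = \beta\left[\alpha A (K/L)^{\alpha-1} + 1 - \delta\right]$, which pins down the capital-labor ratio:

```python
beta, delta, alpha, A, L = 0.99, 0.025, 0.36, 1.0, 1.0  # illustrative values

# Steady-state Euler equation: the marginal product of capital must satisfy
# alpha * A * (K/L)^(alpha - 1) = 1/beta - 1 + delta.
mpk = 1.0 / beta - 1.0 + delta
k = (mpk / (alpha * A)) ** (1.0 / (alpha - 1.0))   # capital-labor ratio K/L

K = k * L
Y = A * K ** alpha * L ** (1.0 - alpha)
I = delta * K     # steady-state investment just offsets depreciation (from 2.4)
C = Y - I         # resource constraint 2.3
```

Checking that the candidate steady state actually satisfies the Euler equation and the resource constraint is exactly what you should do before loglinearising around it.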
2.3 Competitive equilibrium (or decentralising the planner outcome)
In most applications, you will want to study economies where decisions are made by economic agents in a decentralised way, rather than planned economies. We therefore study a decentralised, rational expectations competitive equilibrium of the baseline RBC model. There are many ways we could decentralise the model economy described above, and here we choose the simplest one: a sequential competitive equilibrium in which households and rms interact each period as specied below in markets. Households own the stock of capital (and hence own the rms since all capital is physical capital) and have to decide: how much capital to accumulate, how much to consume and how much to work in any given period. Let superscript s on any variable denote the households counterpart to the aggregate one: e.g. s Kt +1 is households stock of capital next period, etc. In order to avoid confusion, let us use bold letters for aggregate values, e.g. aggregate capital stock is Kt+1 . Households earn a wage rate Wt from working in rms and a rental rate K from renting capital to rms each period Rt ; they take both these prices2 as given (remember this is a purely frictionless economy), where these prices are functions of the aggregate state of the economy: W (At , Kt ) ; RK (At , Kt ). In a rational expectations equilibrium, agents will forecast these prices, and will have to know the functional forms W () and RK () . They also have to know the laws of motion for At and Kt . The households will solve: max Et
max E_t Σ_{i=0}^∞ β^i U(C_{t+i}, L^s_{t+i})  (2.8)

subject to the period budget constraint

C_t + I^s_t = W_t L^s_t + R^K_t K^s_t + Π_t.  (2.9)
The left-hand side specifies how much the household spends on consumption and investment respectively. The right-hand side specifies that the household's resources
2 Do not get confused by this terminology: R^K_t is NOT the price of capital in this economy, but the rental rate of capital; the return net of depreciation is defined below. The price of capital is fixed at 1, since the same good is used for consumption and investment, and investment is reversible (you can transform the capital good back into the consumption good freely). (See the relevant section on Asset Pricing.) Investment adjustment costs would change this feature (see the part taught by Professor Muellbauer).
come from labor income, capital income (from renting capital to firms) and profit income (if any). Since the law of motion for capital is still K^s_{t+1} = (1 - δ)K^s_t + I^s_t, we can write this as:

C_t + K^s_{t+1} = W_t L^s_t + (1 - δ + R^K_t) K^s_t + Π_t,  (2.10)
where we could further define the net real interest rate of this economy as R_t ≡ R^K_t - δ, the rental rate net of depreciation. Using the same technique as before to solve this problem, the decision rules of the household C^s_t(A_t, K_t, K^s_t); K^s_{t+1}(A_t, K_t, K^s_t); L^s_t(A_t, K_t, K^s_t) are a solution to the optimality conditions (together with 2.10 and transversality):

K^s_{t+1}:  U_C(C_t, L^s_t) = β E_t [U_C(C_{t+1}, L^s_{t+1}) (R^K_{t+1} + 1 - δ)]  (2.11)
L^s_t:  -U_L(C_t, L^s_t) = U_C(C_t, L^s_t) W_t  (2.12)

Firms choose how much labor to hire and capital to rent in the spot markets from the household in order to produce the consumption good of this economy. Let a superscript d stand for the value of a variable from the firms' standpoint. Firms are perfectly competitive and choose K^d_t and L^d_t each period to maximize profits Π_t:

Π_t = A_t F(K^d_t, L^d_t) - [W_t L^d_t + R^K_t K^d_t],  (2.13)
where the first term denotes the firm's sales and the term in brackets is the total cost of producing. Optimization leads to:

A_t F_L(K^d_t, L^d_t) = W_t,  (2.14)
A_t F_K(K^d_t, L^d_t) = R^K_t.
Since the production function exhibits constant returns to scale, profits will always be zero: just replace these factor prices in the expression for profits and apply Euler's theorem.

Exercise 7 Tricky(-ish): can you tell how many firms produce in this economy?
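The zero-profit result is easy to check numerically. A minimal sketch using a Cobb-Douglas technology (the functional form introduced later in these notes) with illustrative, uncalibrated parameter values:

```python
# Verify that with a constant-returns-to-scale technology, paying factors
# their marginal products exhausts output (Euler's theorem => zero profits).
# Parameter values are illustrative only; any CRS F would do.
alpha, A, K, L = 0.33, 1.0, 10.0, 0.8

Y = A * K**alpha * L**(1 - alpha)                   # Cobb-Douglas output (CRS)
RK = alpha * A * K**(alpha - 1) * L**(1 - alpha)    # marginal product of capital
W = (1 - alpha) * A * K**alpha * L**(-alpha)        # marginal product of labor

profits = Y - (W * L + RK * K)
print(abs(profits) < 1e-12)   # factor payments exhaust output
```

The same check fails for a technology with decreasing or increasing returns, which is one way to see why the firm's scale is indeterminate here.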
Market clearing. Households and firms meet in spot markets every period and equilibrium requires that all these markets clear. Counting the markets properly, ensuring their clearing, and understanding how this works is an essential part of practical modelling, and may not be trivial in large models. In this simple economy there are three markets: for labor, for capital, and for the consumption good (output). Importantly, when stating the equilibrium conditions for an economy with n markets, you only need to specify market clearing conditions for n - 1 markets; Walras' Law ensures that the nth market will then also be in equilibrium. In our case, we limit ourselves to the factor markets:

L^d_t = L^s_t
K^d_{t+1} = K^s_{t+1} (= K_{t+1}).
Note that I wrote the capital market clearing condition at t + 1; this may be helpful for any market clearing concerning state variables in practical implementation. Finally, consistency of individual and aggregate decisions requires that the law of motion for capital conjectured by households coincide in equilibrium with the aggregate law of motion: K^s_{t+1}(A_t, K_t, K_t) = K_{t+1}(A_t, K_t).

Exercise 8 Prove that Walras' Law holds in this economy, i.e. that Y_t = C_t + I_t.

Exercise 9 4AM: Apply the Dynamic Programming (Bellman Principle) techniques that you have learned to both the centralized and the decentralized economies and show that the solution you get is equivalent to the solution in these notes.
2.4
Welfare theorems
You may recall from Microeconomics (or if you haven't seen this, you will see it in the Micro course next term) that the Pareto optimum (planner equilibrium) and the competitive equilibrium coincide under certain conditions on preferences, technology, etc. The following exercise asks you to show that this is the case in our model:

Exercise 10 Show heuristically that the planner and competitive equilibria coincide (hint: show that the first-order conditions coincide).

Note that this result applies to this very simple, frictionless economy. In most applications the planner and competitive equilibria will be different: the competitive equilibrium will be sub-optimal due to the presence of externalities, distortionary taxes, trading frictions, etc. I hope we do get to see some examples of this later in the course. When the welfare theorems do apply, however, the big advantage is that we can move freely between the competitive and planner equilibria, and the latter is usually i. unique and ii. easy to calculate (the solution to a concave programming problem). Otherwise, existence and uniqueness of a competitive equilibrium may not be trivial to establish.
2.5
Functional forms
In order to simplify the analytics, I will now introduce functional forms. We already assumed that the production function is homogeneous of degree one (exhibits constant returns to scale). Let us assume that it is of the Cobb-Douglas form, consistent with the growth facts:
Y_t = A_t K_t^α L_t^{1-α},
where α is the capital share: if capital is being paid its marginal product, it earns an α share of output. Note that the marginal products of capital and labor
respectively are (equal to the rental rate and wage):
R^K_t = α A_t K_t^{α-1} L_t^{1-α} = α Y_t / K_t;   W_t = (1 - α) A_t K_t^α L_t^{-α} = (1 - α) Y_t / L_t
You immediately see that the capital and labor shares in total output are constant and equal to the respective exponents in the production function. I will also specialize preferences to take the form:

U(C_t, L_t) = ln C_t - v(L_t),  (2.15)
where v(.) is the disutility of labor and is continuously differentiable, increasing and convex. This additively separable utility function is consistent with balanced growth and has some other desirable properties spelled out below. There do exist non-separable utility functions consistent with balanced growth that you may want to use in your applications; see the original article by King, Plosser and Rebelo (1988) dealing with these issues.
2.6

Steady-state and conditions on preferences
We will exploit the welfare theorems in the remainder and focus on the planner equilibrium in solving the model. You should note, however, that all the techniques described here can equally be applied to the decentralized equilibrium. To start with, we want to ensure that our model has a unique non-stochastic steady-state that is consistent with the growth stylised facts reviewed before, concerning some ratios being constant (see the part taught by Dr. Roland Meeks on Growth). In the non-stochastic steady-state, all variables X_t are constant, X_{t+1} = X_t, and technology is normalized to 1, A_{t+1} = A_t = 1 (note that we abstract from growth: this can be incorporated by assuming that A_{t+1} = (1 + g) A_t, where g would be the exogenous rate of growth). Moreover, we can drop the expectations operator. The Euler equation 2.6 evaluated at the steady state yields:

1 = β [F_K(K, L) + 1 - δ].

Since the marginal product of capital depends only on the capital-labor ratio, it follows directly that the latter is constant in steady-state:

K/L = [α / (1/β - 1 + δ)]^{1/(1-α)}.

Since the marginal product of labor (the real wage) also depends only on the capital-labor ratio, it too will be constant and can be written as a function of deep parameters. Using the definition of the real interest rate we find: R = 1/β - 1. Capital accumulation evaluated at the steady state yields the investment-to-capital ratio I/K = δ: in steady state, investment merely replaces depreciating capital. Finally, an important remark on the properties of hours worked in steady-state is in order (this confuses surprisingly many people, please do not be among them!!!). Since we observe in post-war data that there is a long-run trend in wages, but no such trend in hours, we want steady-state hours worked to be independent of the wage. More generally, we want preferences to be consistent with constant hours for a straightforward reason: per-capita hours
are simply bounded above by the time endowment, so they cannot grow (can you work more than 24 hours?). It turns out that the utility function we have chosen does yield constant steady-state hours. Using the functional form of the utility function to evaluate the intratemporal optimality condition we have:

v_L(L) = W / C.

Assuming for simplicity (this is in no way necessary) that C = W L, you see that this becomes L v_L(L) = 1, and hours are independent of the wage or any other potentially trending component. Again, there do exist more general (non-separable) preferences exhibiting this property, but this is enough to make our point. Rather than just finding constant steady-state ratios, etc., as I have done above, let's try to solve for the steady-state explicitly. To do that, we evaluate the equilibrium conditions at the steady state and use the assumed functional forms for F and U. We assume that technology is constant and equal to A (we do not normalize A = 1 as previously). Also, since the steady-state real interest rate and the discount factor are related one-to-one by R = 1/β - 1, I will treat R as a parameter rather than β (just for analytical convenience). From the Euler equation in steady-state we obtain capital as a function of labor:

K = (αA / (R + δ))^{1/(1-α)} L.

Substituting this into the reduced constraint 2.5 we have consumption as a function of labor:

C = A (αA / (R + δ))^{α/(1-α)} L - δ (αA / (R + δ))^{1/(1-α)} L
  = (αA / (R + δ))^{1/(1-α)} ((R + δ)/α - δ) L
  = (αA / (R + δ))^{1/(1-α)} ((R + (1 - α)δ)/α) L,
which, after assuming a functional form for v_L(L), can be solved for L, allowing us thence to solve for all other variables. Note, consistent with the intuition above, that hours are independent of the level of technology. Steady-state hours do, however, depend on preferences. Consider for example a standard functional form v(L) = L^{1+φ}/(1+φ), leading to v_L(L) = L^φ. Substituting this in the expression above we have

L = [ 1 / (1 + αR [(1 - α)(R + δ)]^{-1}) ]^{1/(1+φ)}  (2.16)

Intuitively, the more the agent dislikes work (the higher φ), the less she works in steady-state.
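The steady-state formulas derived above translate directly into code. A minimal sketch that computes the steady state from the deep parameters, with v(L) = L^{1+φ}/(1+φ); the parameter values below are illustrative placeholders, not a calibration:

```python
# Steady state of the RBC model with log utility in consumption and
# v(L) = L^(1+phi)/(1+phi). Parameter values are illustrative placeholders.
alpha, beta, delta, phi, A = 0.33, 0.99, 0.025, 1.0, 1.0

R = 1.0 / beta - 1.0                                     # net real interest rate
KL = (alpha * A / (R + delta))**(1.0 / (1.0 - alpha))    # capital-labor ratio
# Hours from equation 2.16
L = (1.0 / (1.0 + alpha * R / ((1.0 - alpha) * (R + delta))))**(1.0 / (1.0 + phi))
K = KL * L
Y = A * K**alpha * L**(1.0 - alpha)
I = delta * K                        # investment just replaces depreciation
C = Y - I

print(round(R, 4), round(L, 4), round(C / Y, 4))
```

A useful habit: after computing a steady state, plug it back into the original optimality conditions (Euler and intratemporal) and check that they hold, rather than trusting the algebra.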
2.7
Loglinearisation
The model we have described consists of a system of non-linear stochastic difference equations. Finding closed-form solutions for these is impossible, unless we assume that there is full depreciation and log utility in consumption (see the chapter on RBC in David Romer's textbook for this special case). In general, we need to resort to approximation techniques, and there are many that people have used (see the article by Cooley and Prescott for a non-exhaustive review). Arguably the most widely used technique relies on taking a first-order approximation to the equilibrium conditions around the non-stochastic steady-state and studying the behaviour of endogenous variables in response to small stochastic perturbations to the exogenous process. This is the approach we will follow here. It is an instance of the implicit function theorem that you must have seen in Maths: calculating the effects of changes in some parameters (here, the stochastic shocks) on the solution of some system of equations for the variables (here, the optimal decision rules for endogenous variables that are implicitly defined by the system of equilibrium conditions). Let small-case letters denote percentage deviations of the upper-case variable from its steady-state value, or equivalently log-deviations; e.g. for any variable X_t we have

x_t ≡ ln(X_t / X) ≈ (X_t - X) / X,

where the last approximation follows from ln(1 + a) ≈ a.
2.7.1
Intermezzo on loglinearisation
There are many ways you can loglinearize an equation and you should always try to do it in two different ways to minimize the probability of mistakes. In order to loglinearize a (any) system of equations, here is the only theorem you need to know. Suppose we have a nonlinear equation relating two differentiable functions G(X_t) and H(Z_t) defined over vectors of variables X_t = (X^1_t, X^2_t, ..., X^n_t); Z_t = (Z^1_t, ..., Z^m_t):

G(X_t) = H(Z_t)

This equation can be approximated to first order around the steady-state values X and Z (of course, the equation also holds in steady state, G(X) = H(Z)) using the Taylor expansion:

∇G(X) (X_t - X) ≈ ∇H(Z) (Z_t - Z),

where ∇G(X) is the gradient of G with respect to X, i.e. the row vector stacking the partial derivatives evaluated at steady state, G_i(X) = ∂G/∂X^i. This expression contains absolute deviations of each variable from its steady-state value, X^i_t - X^i (whereas what we are after are log-deviations). We can derive the log-linearized version by multiplying and dividing each term in
the summation by the corresponding steady-state value, using X^i_t - X^i ≈ X^i x^i_t, obtaining:

Σ_{i=1}^n G_i(X) X^i x^i_t ≈ Σ_{i=1}^m H_i(Z) Z^i z^i_t.
This has the advantage of being very general; indeed, pretty much any equation can be written in this form. In many cases, however, you can make use of much simpler tricks:

X_t ≈ X (1 + x_t)
X_t Z_t ≈ X Z (1 + x_t + z_t)
f(X_t) ≈ f(X) (1 + (f'(X) X / f(X)) x_t),
where f'(X) X / f(X) is the elasticity of f with respect to X. For example, the last equation says that the log-deviation of f is approximately equal to the elasticity of f times the log-deviation of its argument.
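These tricks are local approximations, and it is instructive to check their accuracy numerically. A small sketch for the first trick, X_t ≈ X(1 + x_t); the values used are arbitrary:

```python
import math

# Check the first trick: X_t ≈ X(1 + x_t), where x_t = ln(X_t / X).
# The approximation error grows with the size of the log-deviation,
# which is why these methods are local (valid for small shocks).
X = 10.0
for x in (0.01, 0.05, 0.20):          # 1%, 5%, 20% log-deviations
    Xt = X * math.exp(x)              # exact X_t implied by x_t = x
    approx = X * (1.0 + x)            # first-order approximation
    rel_err = (Xt - approx) / X       # equals exp(x) - 1 - x
    print(f"x={x:.2f}  relative error={rel_err:.6f}")
```

For a 1% deviation the error is of order 0.005%, which is why first-order methods are considered adequate for ordinary business-cycle shocks but not for large ones.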
2.7.2

Loglinearising the planner economy
Let's loglinearize our equations. I will use different ways of loglinearising each of them, not to confuse you but to give you as many tools as possible. To be sure of your result, you can always apply the most general method I gave you above. Some of them are really easy: e.g. the production function is already log-linear, in the sense that taking logs of it we get a linear equation: ln Y_t = ln A_t + α ln K_t + (1 - α) ln L_t. Evaluate this at the steady state and subtract the resulting steady-state equation from it, and you get (note that this holds exactly, it is not an approximation):

y_t = a_t + α k_t + (1 - α) l_t  (2.17)
The capital accumulation equation is not log-linear, but we can loglinearize it as follows. Divide through by K_t to get:

K_{t+1} / K_t = (1 - δ) + I_t / K_t

Applying the second trick above you get:

1 + k_{t+1} - k_t = (1 - δ) + (I/K)(1 + i_t - k_t).

Simplifying:

k_{t+1} - k_t = (I/K)(i_t - k_t),

and substituting the investment-to-capital steady-state ratio I/K = δ:

k_{t+1} = (1 - δ) k_t + δ i_t  (2.18)
The Euler equation is (having substituted the functional form of the utility function):

1 = β E_t [ (C_t / C_{t+1}) (α Y_{t+1}/K_{t+1} + 1 - δ) ]

This is a bit trickier because it involves expectations. However, due to our focus on first-order approximations we implicitly assume certainty equivalence, so the expectation of a non-linear function of a random variable will be equal (to first order) to the function of the expectation of that variable. Having noted this, let's write the perfect-foresight version of the equation^3:

C_{t+1} / C_t = β (α Y_{t+1}/K_{t+1} + 1 - δ)

and apply the second trick again:

1 + c_{t+1} - c_t = β [ α (Y/K)(1 + y_{t+1} - k_{t+1}) + 1 - δ ].

The constant terms drop out again, so:

c_{t+1} - c_t = β α (Y/K) (y_{t+1} - k_{t+1})
Recalling that in steady state we have α Y/K = R + δ and β = 1/(1 + R), and taking expectations (remember capital is a state variable, E_t k_{t+1} = k_{t+1}), the loglinearised Euler equation is:

E_t c_{t+1} - c_t = ((R + δ)/(1 + R)) (E_t y_{t+1} - k_{t+1})  (2.19)

This equation captures intertemporal substitution in consumption: when the marginal product of capital is expected to be high, expected consumption growth is high (consumption today falls as the planner saves to augment the capital stock). Note that we have implicitly normalized the elasticity of intertemporal substitution (the curvature of the utility function) to unity by assuming a log utility function. Further discussion of this follows when studying the decentralized economy. Loglinearisation of the intratemporal optimality condition v_L(L_t) = (1 - α)(Y_t / L_t)(1/C_t) yields (applying the third trick):
v_L(L) (1 + φ l_t) = (1 - α) (Y / (LC)) (1 + y_t - l_t - c_t),

where φ ≡ v_LL L / v_L is the elasticity of the marginal disutility of work to variations in hours worked. A more useful interpretation of this parameter can be found in the loglinearisation of the competitive economy. Simplifying we get:

(1 + φ) l_t = y_t - c_t,  (2.20)

3 See the next section for a loglinearisation of the Euler equation without working with the perfect-foresight version.
The economy resource constraint is loglinearised as follows. Apply the first trick to get

Y (1 + y_t) = C (1 + c_t) + I (1 + i_t).

Divide through by Y and use that this also holds in steady-state, i.e. Y = C + I, to get:

y_t = (C/Y) c_t + (I/Y) i_t

We have not yet calculated the steady-state ratios that appear here (consumption to output and investment to output). What I want to emphasize is that often the steady-state ratios you need to calculate are informed by the loglinearisation. We have found the ratio of investment to capital and the ratio of output to capital previously, and the share of investment in output is just the ratio of the two; the share of consumption in output is then easily found:

I/Y = (I/K)(K/Y) = δα/(R + δ);   C/Y = 1 - δα/(R + δ), so:

y_t = (1 - δα/(R + δ)) c_t + (δα/(R + δ)) i_t  (2.21)
A boring but necessary part follows now. We need to count equations and endogenous variables and ensure that the numbers square, i.e. we have as many endogenous variables as equations^4. We have 5 variables we want to solve for: y, c, i, k, l and 5 equations: 2.17, 2.18, 2.19, 2.20, 2.21.

Exercise 11 To make sure you gain familiarity with this, substitute out investment i and output y from the above system and get a system of three equations in three unknowns: c, k, l. Now try to loglinearize the nonlinear 2.6, 2.7 and 2.5 directly and make sure you get exactly the same result (of course, after using the functional forms for F and U that we have assumed).

Finally, we need to specify the dynamics of technology, since this is the forcing exogenous stochastic process. It is standard in the literature to assume that A_t follows an AR(1) process in logs, i.e.:

a_t = ρ a_{t-1} + ε_t,

where ε_t is white noise. We will return to issues of measurement of a subsequently.
4 You may think this is trivial, but I can assure you that you will not laugh when you try to build your own models. You can bet that at first you will always end up with at least one equation or variable too many (or too few...). Of course, you can do the counting after you have derived the fully non-linear equilibrium, but make sure that when you loglinearize you use the same conditions.
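The technology process a_t = ρ a_{t-1} + ε_t is trivial to simulate, which becomes useful later when computing model moments. A minimal sketch; ρ and σ are illustrative values in the range commonly used in this literature, not estimates:

```python
import random

# Simulate a_t = rho * a_{t-1} + eps_t with Gaussian white-noise innovations.
# rho and sigma are illustrative, not calibrated from data.
rho, sigma, T = 0.95, 0.007, 200
random.seed(0)                         # reproducible draws

a = [0.0]                              # start at the steady state (a_0 = 0)
for t in range(T):
    a.append(rho * a[-1] + random.gauss(0.0, sigma))

# The unconditional standard deviation of a_t is sigma / sqrt(1 - rho^2),
# so persistence (rho near 1) amplifies the variance of technology.
print(len(a), max(abs(v) for v in a))
```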
2.7.3

Loglinearising the competitive economy
For completeness, let's loglinearize the equilibrium conditions of the competitive economy. To make our life easier, let's use the market clearing conditions and substitute actual aggregate quantities for demands and supplies (e.g. K^s and K^d replaced by K, etc.). The production function and capital accumulation concern the environment, i.e. are primitives of the model. Therefore, they are identical to the planner equilibrium above, 2.17 and 2.18. The Euler equation is (having substituted also the definition of the real interest rate):

1 = β E_t [ (C_t / C_{t+1}) (R_{t+1} + 1) ]

As before, certainty equivalence makes it easy to get the loglinearised Euler equation. Note that since the interest rate is already a rate (so it is in percentage points), we need not take its log-deviations: specifically, we define r_{t+1} ≡ R_{t+1} - R. To see this, take logs of the perfect-foresight version of the Euler equation (i.e. dropping the expectation operator) to get:

0 = ln β + ln C_t - ln C_{t+1} + ln(1 + R_{t+1}).

Add and subtract ln C and use that β = (1 + R)^{-1} to get

c_{t+1} - c_t = ln(1 + R_{t+1}) - ln(1 + R) ≈ R_{t+1} - R ≡ r_{t+1}

and take expectations (again, we can do this due to certainty equivalence) to get the loglinearised Euler equation:

E_t c_{t+1} - c_t = E_t r_{t+1}  (2.22)
You can also get this equation by assuming that real interest rates and future consumption are lognormal and homoskedastic^5. The Euler equation in logs becomes (again, I (ab)use the equality sign for ln(R_{t+1} + 1) ≈ R_{t+1}):

ln C_t = -ln β + ln E_t [C^{-1}_{t+1} (R_{t+1} + 1)]
       = -ln β + E_t ln [C^{-1}_{t+1} (R_{t+1} + 1)] + (1/2) var_t ln [C^{-1}_{t+1} (R_{t+1} + 1)]
       = -ln β - E_t ln C_{t+1} + E_t R_{t+1} + (1/2) var_t (ln C_{t+1}) + (1/2) var_t (R_{t+1}) - cov_t (R_{t+1}, ln C_{t+1})

Under homoskedasticity the conditional second moments are constant and we can drop their time subscripts. Hence, evaluating this equation at the steady-state and subtracting the result from the dynamic equation we get 2.22 (constants, including second moments, drop out). Equation 2.22 captures intertemporal substitution in consumption: when real interest rates are expected to be high, expected consumption growth is high (consumption today falls as the household saves). Note that we have implicitly normalized the elasticity of intertemporal substitution (the curvature of the utility function) to unity by assuming a log utility function. In general, the effect of interest rates on consumption will depend on this parameter.
5 If
Combining the definition of the gross interest rate with the equilibrium expression for the rental rate we have

1 + R_{t+1} = α Y_{t+1}/K_{t+1} + 1 - δ,
which, evaluated at steady-state, gives α Y/K = R + δ. A first-order approximation using the first trick on the left-hand side and the second trick on the right-hand side yields:
(1 + R)(1 + r_{t+1}) = α (Y/K)(1 + y_{t+1} - k_{t+1}) + 1 - δ.

Substituting α Y/K from the expression just derived (after eliminating the constant terms) we get:

r_{t+1} = ((R + δ)/(1 + R)) (y_{t+1} - k_{t+1})
Finally, loglinearisation of the intratemporal optimality condition v_L(L_t) = W_t / C_t yields (applying the third trick):

φ l_t = w_t - c_t,

where φ ≡ v_LL L / v_L is the elasticity of the marginal disutility of work to variations in hours worked. More importantly, φ is referred to as the inverse elasticity of labor supply l to changes in the wage rate w, keeping consumption c fixed.^6 When φ = 0, labor supply is infinitely elastic: when demand shifts, the household is ready to work all the extra hours at the given real wage. Also, in this case consumption is independent of non-wage income, and hence of wealth. When φ → ∞, labor supply is inelastic, and any labor demand shift generates movement in the real wage, while hours stay fixed. NOTE: many papers specify the utility function over leisure, 1 - L_t, rather than hours, by having a utility function of the form ln C_t + h(1 - L_t), where h(.) is a continuously differentiable, increasing and concave function. Make sure you are able to derive the first-order conditions in this case. Notably, the inverse elasticity of labor supply becomes proportional to L/(1 - L) and hence depends on steady-state hours worked. The expressions for the real wage and rental rate are already loglinear, so we have:

w_t = y_t - l_t
r^K_t = y_t - k_t
6 Because utility is separable in consumption and work, φ is also the inverse Frisch elasticity of hours to the wage, i.e. the elasticity keeping fixed not consumption but the marginal utility of consumption (which, when utility is separable as here, is a function of consumption alone). For more general, non-separable preferences, the Frisch elasticity and the elasticity keeping consumption fixed are different objects.
These can equivalently be thought of as factor demands by the firm, for a given level of output. As you would expect, demand is decreasing in the respective factor price. The budget constraint of the household is:

(C/Y) c_t + (I/Y) i_t = (WL/Y)(w_t + l_t) + (R^K K/Y)(r^K_t + k_t)
You can convince yourselves that this is the same as the economy resource constraint we found in the planner equilibrium: merely substitute the loglinearised expressions for the real wage and rental rate, and the steady-state shares WL/Y and R^K K/Y, to get just y_t on the right-hand side. This makes once more the point that the goods market clearing condition (as it should be called in a competitive equilibrium), or the resource constraint, is in fact redundant once we have written down all other equilibrium conditions. Why? Because it is a linear combination of other equilibrium conditions and hence does not contain any new information! If you, for some strange reason, insist on having this equation when solving the model (i.e. by including it in your computer code), you need to drop one of the following: the expressions for factor prices, the market clearing conditions, or the household budget constraint itself. Counting variables and equations again, note that compared to the planner economy we have three more variables (w, r^K and r) and three more equations: the two factor prices, and the definition of the interest rate. As I hope you already anticipate, after substituting out these three variables we get precisely the same equations as in the planner equilibrium, since the two equilibria are equivalent (something we have shown in the general, non-linear case and which carries through to the approximate equilibrium).
2.8
Calibration
By steady-state analysis we found how steady-state variables and ratios (many of which appear in the loglinearised equilibrium) are related to deep parameters, i.e. parameters pertaining to preferences and technology, say X = S(θ), where θ is the vector of all these Greek letters, in our case α, β, δ, φ. This mapping is one-to-one. An important step in solving the model is to get numbers for either of these. There are many ways to do this: basically you have to choose whether to treat steady-state variables as observables and solve for the Greeks by inverting S, θ = S^{-1}(X), or treat the deep parameters as known and solve for steady-state values using X = S(θ). The first approach is the closest in spirit to what the original RBC proponents had in mind: use only observable data on macroeconomic aggregates from the National Income accounts (NIPA) and, using the steady-state relationships between Greeks and steady-state ratios, find the Greeks. This requires a great deal of knowledge of NIPA data and sometimes important choices and judgement about which macroeconomic aggregates to use (e.g. how to treat consumer durables, profits, what is depreciation, etc.); different choices imply different
values for X and, obviously, different (sometimes very different) values for the deep parameters. The User's Manual for this approach is the article by Cooley and Prescott.

Example: Let's do this for our model. The discount factor is pinned down by the steady-state condition R = 1/β - 1; hence by merely looking at the average value of the interest rate we have a value for β (you do have to decide which interest rate to use, though; King and Rebelo use the average return to capital as given by the average return on the Standard & Poor's 500). Alternatively, don't use the interest rate but find the discount factor by using 1/β = α Y/K + 1 - δ, which means you first have to find α and δ and then pick β to match the capital-output ratio. In our simple model, the depreciation rate is simply equal to the share of investment in capital, δ = I/K. α is simply found by recognizing that it is the share of capital income in total income: α = R^K K / Y (this sounds far simpler than it is: getting the right measure for capital income is very tricky; see Cooley and Prescott). [IMPORTANT: Be careful to transform all rates (interest, discount, depreciation, etc.) such that you have quarterly values! Also, make sure to use per-capita values for the aggregates since the model economy is per-capita.] Arguably the most difficult parameter to choose is the elasticity of labor supply, or the elasticity of intertemporal substitution in labor supply. This is where even the most hard-core RBC theorist has to give up, or use a cheap fix. This fix consists of using special functional forms for preferences such that this elasticity is not parameterized, but fixed. Within the preference class we work with here, this is achieved by assuming either that v(L) = ln L, which effectively yields a unit elasticity of hours to wages, v_LL L / v_L = 1, or that v(L) = L, which effectively leads to an infinitely elastic labor supply since v_LL L / v_L = 0. In the general case, however, one needs to pick a value for this parameter^7.
Usually, one needs to resort to microeconomic studies estimating these elasticities, although it is not clear at all whether this parameter bears any relationship to what microeconometricians actually estimate. Finally, the relative weight the agent places on labor in the utility function can be inferred once all other parameters have been calculated, by assuming a value for steady-state hours worked L and using expression 2.16. The other approach, which is best described as parameterization, simply picks values for the Greeks from micro evidence (never use the word calibration if you use this approach in your own work, especially if some Minnesota-bred colleague is in the audience). This approach is being abused in the literature, especially when using very large models with many parameters, and you are better off avoiding it whenever possible.
7 See Prescott (1986) for a way of getting an empirical elasticity based only on aggregate data, by using both household and establishment data on hours worked.
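The inversion θ = S^{-1}(X) described in the example can be sketched in a few lines; the "observed" ratios below are made-up numbers standing in for NIPA measurements, not actual data:

```python
# Back out deep parameters from (made-up) observed steady-state quantities.
# In practice these ratios come from NIPA data; see Cooley and Prescott.
R_quarterly = 0.01           # average quarterly net return on capital (illustrative)
I_over_K = 0.025             # investment-to-capital ratio (illustrative)
capital_income_share = 0.33  # R^K K / Y (illustrative)

beta = 1.0 / (1.0 + R_quarterly)   # from R = 1/beta - 1
delta = I_over_K                   # from I/K = delta in steady state
alpha = capital_income_share       # from alpha = R^K K / Y

# Implied capital-output ratio as a cross-check: alpha * Y/K = R + delta
K_over_Y = alpha / (R_quarterly + delta)
print(beta, delta, alpha, round(K_over_Y, 3))
```

Note how the warning in the text bites: feeding in annual rather than quarterly rates would change β and δ dramatically, and with them every implied steady-state ratio.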
2.9

Solving the model
Our model is finally a system of expectational difference equations. The three equations are (I am just re-stating the loglinearised equilibrium conditions for k, c, l, having eliminated i and y using 2.17, 2.18 and 2.21):

k_{t+1} = (1 + R) k_t + ((R + δ)/α) a_t + ((R + δ)(1 - α)/α) l_t - ((R + (1 - α)δ)/α) c_t  (2.23)

E_t c_{t+1} - c_t = ((R + δ)(1 - α)/(1 + R)) (E_t l_{t+1} - k_{t+1}) + ((R + δ)/(1 + R)) E_t a_{t+1}  (2.24)

l_t = (1/(φ + α)) a_t + (α/(φ + α)) k_t - (1/(φ + α)) c_t  (2.25)

There are many methods to solve this, and you have seen at least a couple of them with Dr. Meeks^8. Since most models are larger than this, you usually do need to use the computer. What I want to show you here is how you can solve the simple example by hand, make you understand that the solution principle is quite general, and therefore that the same technique is applied when using the computer. Let's assume that labor is inelastic for the moment. This allows me to solve the model analytically and be really transparent about what is happening inside the black box that many of you may feel computer codes for solving systems of linear expectational equations are. We will return to elastic labor when solving the model numerically. The reason things get simple is that the equation for hours merely becomes l_t = 0, and the system in standard form is (I have substituted k_{t+1} in the first equation using the second in order to have the endogenous variables on the right-hand side appear only at time t):

E_t c_{t+1} = [1 + ((R + δ)(1 - α)/(1 + R)) ((R + (1 - α)δ)/α)] c_t - (R + δ)(1 - α) k_t - ((R + δ)(1 - α)/(1 + R)) ((R + δ)/α) a_t + ((R + δ)/(1 + R)) E_t a_{t+1}  (2.26)

k_{t+1} = (1 + R) k_t - ((R + (1 - α)δ)/α) c_t + ((R + δ)/α) a_t  (2.27)
8 See Campbell (1994) for an analytical solution relying on the method of undetermined coefficients.
In matrix form (recall that E_t k_{t+1} = k_{t+1}), letting x_t = (c_t, k_t)' be the vector of endogenous variables:

E_t x_{t+1} = Γ x_t + Ψ a_t,  (2.28)

Γ = [ 1 + ((R + δ)(1 - α)/(1 + R)) ((R + (1 - α)δ)/α)    -(R + δ)(1 - α) ]
    [ -(R + (1 - α)δ)/α                                   1 + R          ]

Ψ = [ ((R + δ)/(1 + R)) M - ((R + δ)(1 - α)/(1 + R)) ((R + δ)/α) ]
    [ (R + δ)/α                                                   ]  (2.29)
where M is a general operator: we postulate that the expectation of future technology is a linear function of current technology, E_t a_{t+1} = M a_t; for example, if the shock is AR(1), M = ρ. But this solution method works even if shocks are not AR(1) (e.g. if the shock is an AR of higher order, M will also contain lag operators). So how do we solve this? If you are tempted to say: this is a vector autoregression, so iterate backwards (or use lag operators) to get x_t = Σ_{i=0}^∞ Γ^i u_{t-i}, you should really read the next Section carefully, since this is plainly WRONG. NOTE: many solution procedures write the system in a slightly different way, expressing current variables as a function of their expected future values and shocks:

x_t = Θ_x E_t x_{t+1} + Θ_a a_t.  (2.30)

This representation is equivalent to the previous one once you recognize Θ_x = Γ^{-1}; Θ_a = -Γ^{-1} Ψ.
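The transition matrix of this bivariate system (call it Γ, with shock loading Ψ as in 2.28) can be assembled numerically from the Greeks for an AR(1) shock (M = ρ). A sketch with illustrative parameter values, nothing here is a calibration:

```python
# Assemble Gamma and Psi for the inelastic-labor system
# E_t x_{t+1} = Gamma x_t + Psi a_t, with x_t = (c_t, k_t)'.
# Parameter values are illustrative.
alpha, beta, delta, rho = 0.33, 0.99, 0.025, 0.95
R = 1.0 / beta - 1.0

g = (R + delta) * (1.0 - alpha) / (1.0 + R)     # Euler-equation coefficient
c_coef = (R + (1.0 - alpha) * delta) / alpha    # coefficient on c_t in the k equation

Gamma = [[1.0 + g * c_coef, -(R + delta) * (1.0 - alpha)],
         [-c_coef,           1.0 + R]]

Psi = [(R + delta) / (1.0 + R) * rho - g * (R + delta) / alpha,
       (R + delta) / alpha]

# Sanity check: one can verify algebraically that det(Gamma) = 1 + R = 1/beta.
det = Gamma[0][0] * Gamma[1][1] - Gamma[0][1] * Gamma[1][0]
print(det, 1.0 + R)
```

Having the determinant pinned down analytically is a cheap way to catch transcription errors before handing the matrix to a solver.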
2.9.1
Solving forward and backward: (local) stability, indeterminacy and equilibrium uniqueness
As a general rule, please always remember that control variables should be solved forward and state (predetermined) variables backward. There is no mysticism involved in this. Control variables are decided upon by the agent looking into the future, maximizing the expected value of the objective function: the future distribution of shocks will hence matter, and we have no initial value from which to start. State or predetermined variables have already been decided upon at time t; indeed, they summarise the whole history of the economy, we have initial values for them, and the whole past distribution of shocks will matter. Suppose s_t is a state variable for which we have the equation, where u is an exogenous shock:

s_{t+1} = ρ_s s_t + u_t.

This can easily be solved backward to yield s_{t+1} = Σ_{i=0}^∞ (ρ_s)^i u_{t-i} if the stability condition |ρ_s| < 1 is met^9. If this condition is not met, the equation is
9 Remember that you can solve this equation either by backward iteration or by lag operators: st+1 = (1/(1 − ρs L)) ut, and recall that 1/(1 − ρs L) = 1 + ρs L + (ρs L)² + ... + (ρs L)^i + .... The powers of ρs die away if the stability condition is met; otherwise the whole thing explodes and the equation is unstable.
unstable. Suppose yt is a control variable and you have the following equation dictating its dynamics:

Et yt+1 = ρy yt + ut.

The way to solve this is to iterate forward10 (or use the forward operator F, F yt = yt+1) after writing:

yt = (ρy)^{−1} Et yt+1 − (ρy)^{−1} ut,

to get

yt = −Et Σ_{i=0}^∞ (ρy)^{−i−1} u_{t+i}.
Clearly, you can do this if and only if |ρy| > 1, which is the opposite of what you need for a backward equation. These were simple univariate examples, but the same principle applies when you have multivariate models. It is a general result due to Blanchard and Kahn (Econometrica) that: in a system of n linear expectational difference equations, if m variables are predetermined, or state variables (and the rest n − m are not, i.e. are control variables), there exists a unique solution if and only if exactly m roots (eigenvalues) of the transition matrix of that system are inside the unit circle. If too few roots are inside the circle, we have instability: no stable equilibrium exists, pretty much as in our example above where |ρs| > 1; we are simply unable to solve some of the backward equations backward. If too many roots are inside the unit circle, we have equilibrium indeterminacy: we are unable to solve some forward equations forward. For an excellent treatment of these issues (and not only) I warmly recommend the book by Roger Farmer, The Macroeconomics of Self-fulfilling Prophecies, published at MIT Press.

Let's try to understand this better by returning to our simple bivariate example. What was so wrong with solving the whole system backwards is that consumption is a control, forward-looking variable. This is not only a technical point, it is economic intuition: at the core of this model (and most models analysing business cycles) lies the permanent income hypothesis; consumption depends on the present discounted value of future income, i.e. on lifetime resources. Capital, on the other hand, is a state variable: it summarizes the history of the economy, i.e. all the past choices regarding consumption versus investment. So we need to solve one equation forward and one backward. The two equations, as they appear now, are not independent and cannot be solved separately.
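The univariate forward logic above is easy to verify numerically. Here is a minimal sketch (Python, perfect foresight so the expectation drops out; the root and the shock path are illustrative, not taken from the model):

```python
import numpy as np

# Numerical check of the forward solution y_t = -sum_i rho_y^{-(i+1)} u_{t+i}
# for an explosive root, in a perfect-foresight setting.
rho_y = 1.5                     # |rho_y| > 1: the equation must be solved forward
T = 200                         # truncation horizon; terms vanish since rho_y^{-T} ~ 0
rng = np.random.default_rng(0)
u = rng.normal(size=T + 50)     # a known path for the exogenous shock

def y_forward(t):
    """Truncated forward solution y_t = -sum_{i=0}^{T-1} rho_y**(-(i+1)) * u[t+i]."""
    i = np.arange(T)
    return -np.sum(rho_y ** (-(i + 1)) * u[t + i])

# The candidate solution must satisfy the difference equation y_{t+1} = rho_y*y_t + u_t
t = 3
print(abs(y_forward(t + 1) - (rho_y * y_forward(t) + u[t])) < 1e-8)
```

The truncation error is of order rho_y^{−T}, which is negligible here; try rho_y = 0.9 instead and the sum diverges, illustrating why stable roots must be solved backward.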
However, we can uncouple them by applying a result from linear algebra that uses the eigenvalue decomposition of Γ. Consistent with our intuition above (and with the Blanchard-Kahn result), we need one eigenvalue of Γ to be inside and one outside the unit circle. Let's first see whether this is the case. You can either solve for the eigenvalues by brute force (not recommended)
10 Solve this e.g. by iterating forward: yt = (ρy)^{−1} Et yt+1 − (ρy)^{−1} ut = (ρy)^{−2} Et yt+2 − (ρy)^{−2} Et ut+1 − (ρy)^{−1} ut, and so forth (using the law of iterated expectations).
or show this more elegantly, e.g. as follows. First, notice that the determinant of Γ is det Γ = 1 + R > 1. Since the determinant is the product of the eigenvalues (remember?), one of the eigenvalues will always be outside the unit circle; hence, we will never have too few explosive roots - the model will not be indeterminate. We still need to prove that the model will not have too many explosive roots, i.e. that a locally stable solution exists. The characteristic polynomial of Γ, which has as its roots the eigenvalues λ1,2, is

J(λ) = λ² − trace(Γ) λ + det Γ,

where the trace is

trace(Γ) = 2 + R + ((R+δ)(1−α)/(σ(1+R)))((R+δ)/α − δ).
The condition for existence of a unique RE equilibrium implies: J(1) J(−1) < 0. Since J(1) = 1 − trace(Γ) + det Γ and J(−1) = 1 + trace(Γ) + det Γ, we see immediately that

J(1) = −((R+δ)(1−α)/(σ(1+R)))((R+δ)/α − δ) < 0 and J(−1) = 4 + 2R + ((R+δ)(1−α)/(σ(1+R)))((R+δ)/α − δ) > 0. QED.

Since J(0) > 0, both roots are positive (there are no oscillatory dynamics). We find the roots by solving J(λ) = 0:

λ± = [trace(Γ) ± √(trace(Γ)² − 4 det Γ)] / 2.

Note that the smaller root λ− ∈ (0, 1) is stable and the larger one λ+ is unstable. We know from linear algebra that we can decompose our non-singular square matrix as: Γ = P Λ P^{−1}, where Λ is a diagonal matrix with the eigenvalues λ+, λ− as entries and P is a matrix stacking the eigenvectors corresponding to these eigenvalues11. Replace this decomposition in our system: Et xt+1 = P Λ P^{−1} xt + Ψ at, pre-multiply by P^{−1} and define the new variables zt ≡ P^{−1} xt to get:

Et zt+1 = Λ zt + P^{−1} Ψ at.

Now these equations ARE uncoupled and we can solve them separately. The first one is forward-looking and has an explosive root λ+ > 1, as it should (denote the first element of z by z^c to remind ourselves it comes from consumption):

z^c_t = (λ+)^{−1} Et z^c_{t+1} − (λ+)^{−1} [P^{−1}Ψ]_{1·} at,
11 Recall that e.g. the first eigenvector (first column of P), call it p+, corresponding to λ+, is found from: Γ p+ = λ+ p+.
where, for any matrix G, [G]_{i·} denotes its i-th row. The solution is:

z^c_t = −Σ_{i=0}^∞ (λ+)^{−i−1} [P^{−1}Ψ]_{1·} Et a_{t+i}.

Now you can use whatever process you want for at (however, recall that due to our loglinearisation technique we restrict attention to small shocks). For example, for our AR(1) process, Et a_{t+i} = ρ^i at, so z^c_t = −(λ+)^{−1} [P^{−1}Ψ]_{1·} at Σ_{i=0}^∞ (ρ/λ+)^i, and since ρ < 1 < λ+, we have:

z^c_t = −(λ+ − ρ)^{−1} [P^{−1}Ψ]_{1·} at.

The second equation is backward-looking:

z^k_{t+1} = λ− z^k_t + [P^{−1}Ψ]_{2·} at,

and can be solved in a standard way (the moving average representation can be easily found but is not particularly informative). To find the paths of consumption and capital you merely need to calculate xt = P zt:

(ct, kt)′ = P (z^c_t, z^k_t)′.

To find the paths of investment and output you merely use the production function (in this case, recall that lt = 0): yt = at + α kt, and the resource constraint.

Exercise 12 In order to make sure that you understand this solution method, try to solve the model written in the forward form 2.30 and show you get precisely the same solution.
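The eigenvalue conditions above can be checked numerically. A minimal sketch (Python; the calibration α = 0.3, δ = 0.025, β = 0.99, σ = 1 is illustrative - see the calibration section for the values actually used) that builds the transition matrix and verifies both the determinant result and the saddle-path property:

```python
import numpy as np

# Sketch: build the transition matrix of the bivariate system
# E_t x_{t+1} = Gamma x_t + Psi a_t, x_t = (c_t, k_t)', and check
# det Gamma = 1 + R and the Blanchard-Kahn root configuration.
alpha, delta, beta, sigma = 0.3, 0.025, 0.99, 1.0   # illustrative calibration
R = 1 / beta - 1                       # steady-state real interest rate
lam = (R + delta) / alpha - delta      # steady-state C/K ratio

g = (R + delta) * (1 - alpha) / (sigma * (1 + R))   # recurring coefficient
Gamma = np.array([[1 + g * lam, -(R + delta) * (1 - alpha) / sigma],
                  [-lam,        1 + R]])

# With one control (c) and one state (k), uniqueness requires exactly
# one eigenvalue inside the unit circle.
eig = np.sort(np.abs(np.linalg.eigvals(Gamma)))
print(np.isclose(np.linalg.det(Gamma), 1 + R))   # det Gamma = 1 + R
print(eig[0] < 1 < eig[1])                       # saddle path: unique solution
```

The eigenvectors returned by np.linalg.eig would give the matrix P used to uncouple the system as above.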
2.9.2 Elastic labor
Turning to our more general case with φ < ∞ and eliminating hours, we can still express the model as a two-equation system. Let γ ≡ (R+δ)(1−α)/(φ+α). The resource constraint becomes:

kt+1 = (1 + R + γ) kt + ((R+δ)(1+φ)/(α(φ+α))) at − ((R+δ)/α − δ + σγ/α) ct,   (2.31)

and the Euler equation becomes:

Et ct+1 = ct − (γφ/(σ(1+R) + γ)) kt+1 + (γ(1+φ)/((1−α)(σ(1+R) + γ))) Et at+1.   (2.32)

Substituting 2.31 into 2.32, we again obtain a system of the form Et xt+1 = Γ xt + Ψ at in the vector xt = (ct, kt)′. This system nests the inelastic-labor case (check this! note that φ → ∞ implies γ → 0, (R+δ)(1+φ)/(α(φ+α)) → (R+δ)/α and γφ → (R+δ)(1−α)) and can be solved as before.
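A small numerical check of this nesting claim (Python, illustrative calibration as before; the coefficient expressions are those of the capital accumulation equation above):

```python
import numpy as np

# Check that the elastic-labor coefficients collapse to the inelastic-labor
# ones as phi -> infinity (illustrative calibration).
alpha, delta, beta, sigma = 0.3, 0.025, 0.99, 1.0
R = 1 / beta - 1

def coeffs(phi):
    g = (R + delta) * (1 - alpha) / (phi + alpha)
    on_k = 1 + R + g                                           # coefficient on k_t
    on_a = (R + delta) * (1 + phi) / (alpha * (phi + alpha))   # coefficient on a_t
    return on_k, on_a

on_k, on_a = coeffs(1e10)                     # phi very large ~ inelastic labor
print(np.isclose(on_k, 1 + R))                # gamma -> 0
print(np.isclose(on_a, (R + delta) / alpha))  # -> (R+delta)/alpha
```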
2.9.3 Discussion of solution in general case

The solution method described above relies on the Jordan decomposition of the transition matrix. In more complicated models this won't work, since the transition matrix may be singular. Solution methods are readily available for such cases relying on the Generalised Schur decomposition, a generalization of the Jordan decomposition, but in most instances they require numerical analysis (i.e. using the computer)12. I strongly recommend to those interested in doing macro the article "Computing sunspot equilibria in linear rational expectations models", by Lubik and Schorfheide, Journal of Economic Dynamics and Control, 2004. Whatever solution method you use, in general in large models the solution under equilibrium determinacy is typically a recursive equation of the form:

xt = Mx xt−1 + Me et   (2.33)
for the vector of variables x (note: x includes also the exogenous processes, such as a above) and white-noise shocks e.
2.10 Welfare analysis - a primer

Another advantage of using a microfounded model is that we can talk meaningfully about welfare. The welfare of the representative agent in our economy is summarized by the value function V(Kt, At). From the Bellman equation evaluated at the optimum (i.e. where we already recognised that we are along the optimal path and dropped the max13):

V(Kt, At) = U(Ct) + β Et V(Kt+1, At+1).

For ease of interpretation, we take a monotonic transformation of the value function and try to summarize welfare in a variable that is measured in consumption units, defining the new variable Vt implicitly from:

U(Vt) = V(Kt, At).

Therefore:

U(Vt) = U(Ct) + β Et U(Vt+1).   (2.34)
12 For examples of the use of these methods, see the Matlab codes and explanations/examples provided by Roland Meeks in the computational classes, using the solution methods of Paul Klein and Ben McCallum.
13 Of course, the Bellman equation without imposing optimality is V(Kt, At) = max [U(Ct) + β Et V(Kt+1, At+1)], but by focusing on the optimal path for consumption (and implicitly for future periods' capital stock) we can drop the max.
In steady state we have (1 − β) U(V) = U(C); since U is bijective, this pins down the steady-state V as a function of C. A log-linear approximation to 2.34 gives (using the third trick):

U(V)(1 + (U′(V)V/U(V)) vt) = U(C)(1 + (U′(C)C/U(C)) ct) + β U(V)(1 + (U′(V)V/U(V)) Et vt+1)

U′(V)V vt = U′(C)C ct + β U′(V)V Et vt+1.

Assuming that the elasticity of utility with respect to its argument, U′(X)X/U(X), is independent of the level of its argument X (which holds for most utility functions you will use, e.g. for CRRA, log, etc.), we have U′(C)C/(U′(V)V) = U(C)/U(V) = 1 − β, so we get:

vt = (1 − β) ct + β Et vt+1.

Using this equation, the path of vt can be simulated, as for any other variable, to assess the first-order effect of shocks on the welfare of the representative agent. Further intuition can be gained by solving the equation forward to obtain:

vt = (1 − β) Et Σ_{i=0}^∞ β^i ct+i.   (2.35)

Therefore, the first-order effect on the representative agent's welfare of any shock that makes consumption deviate from its long-run trend (steady-state value) is measured by the expected present discounted value of these deviations, scaled by 1 − β. Note that 1 − β is a very small number since β is close to 1.

Exercise 13 What is the welfare cost of fluctuations in this economy? Hint: ask the question in a slightly different way: what is the value of eliminating business cycles? Justify your answer in one sentence.
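To see the welfare recursion at work, here is a minimal sketch (Python) that feeds an illustrative decaying consumption deviation - not a model-generated path - through vt = (1 − β) ct + β Et vt+1 and checks it against the forward solution 2.35:

```python
import numpy as np

# Sketch: welfare deviation v_t = (1-beta)*c_t + beta*E_t v_{t+1}, computed for
# a deterministic, illustrative consumption path c_t = 0.5 * 0.95**t.
beta = 0.99
T = 5000                                  # truncation horizon
c = 0.5 * 0.95 ** np.arange(T)

# Forward solution (2.35): v_0 = (1 - beta) * sum_i beta**i * c_i
v0_forward = (1 - beta) * np.sum(beta ** np.arange(T) * c)

# Backward recursion on v_t = (1-beta)*c_t + beta*v_{t+1}, terminal v_T = 0
v = 0.0
for t in reversed(range(T)):
    v = (1 - beta) * c[t] + beta * v

print(np.isclose(v, v0_forward))   # the two computations agree
```

Note how small the resulting v0 is relative to the impact consumption deviation: the 1 − β scaling is doing the work.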
2.11 Evaluating the model's performance

Having solved for all endogenous variables as a function of the exogenous driving force allows us to evaluate the model by comparing its predictions to the data. That is, we want to compute moments of our theoretical variables (which are, in the model, log-deviations from steady-state, or from a balanced-growth path) and compare them with moments of data variables, which are also in log-deviations from a trend component. To do that, we need to discuss two issues: i. how do we measure the technology shock; ii. how do we compute relevant moments.
2.11.1 Measurement of technology
What is the technology shock? How do we measure it? We have a theory that puts this shock at the heart of business cycle fluctuations. So why not get data on it? Any attempt to connect to data sources such as Datastream and
search for something similar is useless - so don't even try. Also, you may read the FT and/or the Economist regularly (probably you should) - but you will never read about anything resembling this shock that is supposed to generate the bulk of observed macroeconomic fluctuations14. What can we do about it? (i.e.: what have people done in the past?) The answer that people came up with is something you have already seen in the Growth lectures - extract the stochastic component of productivity from the Solow residual, i.e. the difference between changes in output and changes in measured inputs. The details of the method vary to a large extent across studies, but in essence the method is as follows. Taking logs of our production function we have:

ln At = ln Yt − α ln Kt − (1 − α) ln Lt   (2.36)

You can use information on quarterly total measured output, hours worked (either from establishment or household data) and capital stock, together with the estimate of α (see the calibration part), to calculate ln At. This is a less trivial problem than it seems, because of measurement issues. For example, no universally accepted measure of the capital stock exists, technology may contain a time trend in the data, etc. I give you two prominent examples from the literature of how people handled some of these issues.

1. Cooley and Prescott (1995) take first differences of 2.36 to get (note that ln At − ln At−1 = at − at−1):

at − at−1 = (ln Yt − ln Yt−1) − α (ln Kt − ln Kt−1) − (1 − α) (ln Lt − ln Lt−1)   (2.37)

They assume quarterly variations in the capital stock to be zero (ln Kt − ln Kt−1 = 0), since this series is reported only annually and any method of interpolating a quarterly series would be arbitrary and would add noise to both output and technology. They use real measured GNP data for Yt, and find the shocks to be well-described by an AR(1) as we assumed above, with ρ = 0.95 and σε = 0.007.

2. King and Rebelo (1999)'s way of measuring the stochastic component in technology differs in two ways. First, they work with a production function that includes labor-augmenting technological progress Ht that grows exogenously (you have seen this in the growth lectures) to get a modified version of 2.36:

ln SRt = ln Yt − α ln Kt − (1 − α) ln Lt   (2.38)

ln SRt = ln At + (1 − α) ln Ht   (2.39)
Second, they do use a quarterly series for capital, found (following Stock and Watson's paper in the Handbook, same volume) by the perpetual inventory method, i.e. generated from the investment series using the capital accumulation equation.
14 Although, to be honest, this has changed recently - see Martin Wolf's article in Wednesday 9th of November's FT on productivity in the UK.
Using the empirical measure found from 2.38 and the deterministic process for H: ln Ht = ln Ht−1 + ln g, where g is the exogenous growth rate, you can estimate a process for ln At. Simply fit a linear trend to ln SRt to find g and use the residuals to estimate ρ = 0.979 and σε = 0.0072. Eliminating the trend is consistent with the model being expressed as log-deviations from the steady-state, and hence being stationary (we could have introduced labor-augmenting technological progress from the outset, in which case we would have needed to loglinearize around the balanced-growth path rather than a constant steady state - see King and Rebelo for such a model).
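The King-Rebelo procedure can be sketched on synthetic data. In the example below everything is simulated: α, the trend growth rate, and the capital and hours series are placeholders, not US data, so the point is only to illustrate the mechanics (residual, detrending, AR(1) estimate):

```python
import numpy as np

# Sketch: compute the Solow residual, fit a linear trend (the H_t growth),
# and estimate an AR(1) on the detrended residual. All series are synthetic.
rng = np.random.default_rng(1)
T, alpha, rho, sig = 200, 0.3, 0.95, 0.007

a = np.zeros(T)                            # "true" stochastic technology
for t in range(1, T):
    a[t] = rho * a[t - 1] + sig * rng.normal()
lnH = 0.004 * np.arange(T)                 # deterministic trend growth (assumed)
lnK = 2.0 + 0.005 * np.arange(T)           # placeholder capital series
lnL = np.zeros(T)                          # inelastic labor, L = 1
lnY = a + (1 - alpha) * lnH + alpha * lnK + (1 - alpha) * lnL

lnSR = lnY - alpha * lnK - (1 - alpha) * lnL          # Solow residual
x = np.arange(T)
trend = np.polynomial.polynomial.polyfit(x, lnSR, 1)  # linear trend -> g
a_hat = lnSR - np.polynomial.polynomial.polyval(x, trend)

# AR(1) persistence estimate: OLS slope of a_hat on its own lag
rho_hat = np.sum(a_hat[1:] * a_hat[:-1]) / np.sum(a_hat[:-1] ** 2)
print(0.5 < rho_hat < 1.05)    # recovers a persistent process (downward biased)
```

In a short sample the AR(1) estimate is biased downward, which is one reason actual studies are careful about sample length and detrending choices.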
2.12 Impulse responses and intuition
This section presents some impulse response analysis and an intuitive discussion. First, let us remember what an impulse response function actually is. Take the simplest AR(1) process that describes our productivity:

at = ρ at−1 + εt,   (2.40)

where εt is iid N(0, σε²). Suppose we are at time t and we start at the steady-state (so at−1 = 0), when an unexpected one-time shock εt occurs, and let εt = 1. The impact response of at to εt is therefore given simply by 1. To find out what happens from time t + 1 onwards, simply scroll (2.40) forward one period:

at+1 = ρ at + εt+1 = ρ² at−1 + ρ εt + εt+1 = ρ,   (2.41)

where the last equality follows from at−1 = 0 (we started at steady state) and εt+1 = 0 (the shock is one-off). Similarly, you find the impulse response of a at horizon t + j to a unit shock at time t as:

at+j = ρ^j, j ≥ 0.

This is the very simple impulse-response function for our very simple AR(1) productivity shock. Note that the impulse response function is given by the coefficients in the moving average representation; take (2.40) and invert it using the lag operator (or do repeated substitution):

at = (1/(1 − ρL)) εt = Σ_{i=0}^∞ ρ^i ε_{t−i}.   (2.42)

The response of at to a unit shock i periods ago is given by the corresponding coefficient in the MA representation, ρ^i. Let's now focus on our endogenous variables and try to find their impulse responses to a unit technology shock. Things are as simple as for the productivity process once we recall that the solution of the linear rational expectations system of equations in the most general case can be represented in the VAR(1) form by (2.33), restated here:

xt = Mx xt−1 + Me εt,
where xt is a vector comprising all our variables, including at. The impact responses are given in the impact vector Me, since we start at steady-state where xt−1 = 0. You can invert (2.33) just as before to get:

xt = (I − Mx L)^{−1} Me εt = Σ_{i=0}^∞ (Mx)^i Me ε_{t−i},

where I is an identity matrix of appropriate dimensions. So the responses to a unit one-off shock occurring i periods ago on today's variables are found in: (Mx)^i Me.
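A minimal sketch of this computation (Python; the Mx and Me below are illustrative numbers, not the solved model's matrices):

```python
import numpy as np

# Sketch: impulse responses from the VAR(1) solution x_t = Mx x_{t-1} + Me e_t.
Mx = np.array([[0.90, 0.03],
               [0.10, 0.92]])     # illustrative stable transition matrix
Me = np.array([[0.3],
               [1.0]])            # illustrative impact vector

H = 40                            # horizon in quarters
irf = np.zeros((H, 2))
for j in range(H):
    # response at horizon j to a unit shock at date 0: (Mx)^j Me
    irf[j] = (np.linalg.matrix_power(Mx, j) @ Me).ravel()

print(np.allclose(irf[0], Me.ravel()))   # impact response is the Me column
```

Stacking the rows of irf over j traces out exactly the kind of paths plotted in the figures below.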
2.12.1 The role of labor supply elasticity
Figure 1 plots the response to a unit technology shock under two scenarios regarding labor supply elasticity for our otherwise baseline calibration (these are obtained by running the Matlab codes that I included in the Appendix and will make available to you). Look first at the solid blue line, plotting the inelastic labor case. Technology increases and this increase is persistent. Ceteris paribus, this increases the productivity of both labor and capital, and hence their marginal products. From the standpoint of the households, this increase in both factor prices translates into an increase in the willingness to invest (since labor is inelastic, the labor supply curve is vertical - all the increase in labor demand is accommodated by an increase in the real wage). It also implies that households will consume more. However, note that since the interest rate will be falling, the household finds it optimal to save some of this increase in wealth and postpone consumption - this is why you see a hump-shaped impulse-response for consumption. From period 2 onwards, investment starts adding to the capital stock of the economy (although capital does not react on impact - remember it is a predetermined variable) and output keeps expanding. Note that the maximum response of consumption is reached in the same period in which the interest rate cuts the horizontal axis: when the interest rate becomes negative, it is optimal to substitute consumption intertemporally from the future into today.

With elastic labor, the responses change as follows. When productivity increases, firms increase labor demand. You can see this by looking at the labor demand equation for firms and noting that capital does not respond on impact:

LD: wt = at + α(kt − l^d_t).

The household is willing to accommodate some of that increase in demand by working more due to an income effect (whereas with inelastic labor, all this increase in demand translates into an increase in wages):
LS: φ l^s_t = wt − σ ct.
Labor market equilibrium ensures that the real wage will hence increase by less than in the inelastic-labor case. However, the increase in hours leads to a larger increase in the marginal product of capital - therefore, it is optimal to
invest more, thereby augmenting the capital stock even more (and ensuring a further expansion in labor demand from time t + 1 onwards). Both the increase in hours worked and capital ensure that the expansion in output is larger. Recall that φ also governs the intertemporal elasticity of substitution in labor supply (you get this equation by substituting for consumption from the labor supply equation into the Euler equation):

lt = Et lt+1 − (1/φ)(Et wt+1 − wt − Et r̂t+1),

where r̂t+1 denotes the deviation of the expected real interest rate from its steady-state value.
This says that, ceteris paribus, if I expect the real wage to be higher tomorrow than it is today, I want to postpone some work for tomorrow (and the more so, the lower is φ, i.e. the higher is the elasticity). This intertemporal substitution effect usually works in the opposite direction of the income effect if the real wage is expected to be increasing for some period (as it is in our case - see the red dashed line). However, the net effect on hours worked is positive (and hence the income effect dominates) since wage growth is expected to be negative for most of the adjustment path. Moreover, since the expected real interest rate is positive at least in the first quarters, there is an intertemporal substitution effect of the interest rate that says you should give up some leisure (work more) today. All these effects disappear when φ → ∞.
FIGURE 1: Responses to a unit technology shock, inelastic vs. elastic labor supply; panels include the real wage and productivity (horizons up to 60 quarters).
Figure 2 brings this to an extreme, plotting (blue solid line) the case whereby labor supply is infinitely elastic, φ = 0 (the indivisible labor case). The effects described previously are amplified even further. Note that consumption will track the real wage, and hours adjust fully in order to ensure this optimality condition is met. Very importantly, note that although the labor supply curve is horizontal when φ = 0, the real wage still moves! This is due to intertemporal substitution in consumption: on impact, the agent consumes some of the increase in productivity, saves some (the interest rate is high today) and also works as many hours as demanded by the firm. The real wage that clears the labor market is wt = σ ct and is also equal to the marginal product of labor.
FIGURE 2: Responses to a unit technology shock with infinitely elastic labor supply (φ = 0); panels include investment, labor, capital and productivity (horizons up to 60 quarters).
2.12.2 The role of shock persistence

Figure 3 emphasizes the role of persistence by plotting, together with the baseline calibration (red dashed line; notably, labor elasticity is 2), a case whereby the shock is one-off, with zero persistence (blue solid line). Since the marginal product of labor increases, the household again finds it optimal to work more hours today: the real wage is very high today as compared to all future periods (remember, the household knows that this shock is temporary). This increase in hours adds to the direct effect of the increase in productivity to obtain an even larger increase in output. This increase in output ought to be allocated between consumption and investment. Since the interest rate is high
today compared to all future periods, it is optimal to save and invest most of the increase in output (postponing some of the gains for consumption in future periods), and to consume only a small fraction today. Investment increases on impact by a large amount - roughly four times the increase in output. From period 2 onwards, there is no productivity increase. The household finds itself with a higher capital stock (due to previous investment), which she will now optimally consume - and hence disinvest; this is optimal since the interest rate is now low relative to future periods. Consumption and leisure are both normal goods, and the household wants to enjoy more of both (the relative quantities being dictated by the elasticity of labor supply): therefore, hours worked also fall below their steady-state level.
These transitional dynamics emphasize a point first made by Cogley and Nason (1995) - that the benchmark RBC model lacks a strong internal propagation mechanism, or features too little endogenous persistence. Periods of high output are not systematically followed by periods of similarly high output in response to purely transitory shocks. This has led most studies to focus on models in which persistence is inherited from the exogenous process (such as in the responses with red dashed lines); others have focused on enhancing the internal propagation mechanism.
FIGURE 3: Responses to a unit technology shock, persistent vs. purely transitory shock; panels include investment, labor, the real wage, capital, the interest rate, the rental rate and productivity (horizons up to 40 quarters).
Finally, it is worth considering the case whereby changes in technology are permanent - i.e. there is a unit root in the technology process. We compare this with our benchmark case in Figure 4. There are two main differences. The first (the easier one) is that permanent (as opposed to temporary, albeit very persistent) changes in technology have permanent effects on output, consumption, investment, the real wage and capital (not hours!!!). You can calculate these effects analytically by taking the derivative of the steady-state variables we have calculated in section 2.5 with respect to A. Secondly, there are important differences concerning transitional dynamics. If technology is permanently higher there are wealth effects that are absent otherwise - the household recognizes that it will be permanently richer. These effects combine with the wage effect: as before, the household understands that
wages, despite having increased, are lower than in all future periods. Therefore, on impact it will choose to enjoy more leisure and work less than in the persistent-but-temporary shock case. The same effects make the household willing to consume more, and therefore invest less. These choices are consistent with the path of the interest rate, which is monotonically decreasing over time after a positive initial response. Moreover, since interest rates never fall below their steady-state value, consumption is monotonically increasing towards its new steady-state value (it does not overshoot). Investment does overshoot its new steady-state value precisely because the marginal product of capital is high in the first period.
FIGURE 4: Responses to a unit technology shock, the role of shock persistence; panels include output, consumption, investment, capital, the rental rate and productivity (horizons up to 100 quarters).
2.13 Second moments
Computation of second moments can be achieved by Monte Carlo simulations (something I won't bother you with) or analytically. Let's use (2.33) to calculate the covariance matrix of xt, Σxx = E(xt xt′), noting that we know the covariance matrix of the shocks Σee = E(et et′) (in the simple one-shock case, merely the variance of the shock to technology). Furthermore, since we only consider stationary representations of the economy, Σxx = E(xt+j x′t+j) for any j. Hence, we have

Σxx = Mx Σxx Mx′ + Mx E(xt−1 et′) Me′ + Me E(et x′t−1) Mx′ + Me Σee Me′.

Remembering that et are innovations, and hence orthogonal to xt−1, the terms in the middle are zero, so this reduces to:

Σxx = Mx Σxx Mx′ + Me Σee Me′.

This is a matrix equation with solution (check your linear algebra books):

vec(Σxx) = (I − Mx ⊗ Mx)^{−1} (Me ⊗ Me) vec(Σee),

where for any s × s matrix Σ, vec(Σ) denotes the column vector obtained by stacking its column vectors {Σ·i}, i = 1, ..., s, on top of each other, I is an identity matrix of dimension s² × s² and ⊗ is the Kronecker product. Autocovariances (i.e. for leads/lags) are similarly computed:

E(xt x′t−j) = (Mx)^j Σxx.
These formulae can be easily programmed and are used to obtain the numbers you find in the Tables - e.g. the standard deviations and autocorrelations of output, consumption, hours worked, investment, the correlations of each aggregate with output (contemporaneously and at leads and lags), and so on.

Table 2: Moments for Baseline RBC model

Variable   σx     σx/σy   E[xt xt−1]   corr(x, y)
y          1.39   1.00    0.72         1.00
c          0.61   0.44    0.79         0.94
i          4.09   2.95    0.71         0.99
l          0.67   0.48    0.71         0.97
Y/L        0.75   0.54    0.76         0.98
w          0.75   0.54    0.76         0.98
r          0.05   0.04    0.71         0.95
A          0.94   0.68    0.72         1.00

Source: King and Rebelo, 1999. Read King and Rebelo, section 4.3.

The moments of interest can be divided into two categories:

1. volatilities.
A first test for the model is the Kydland-Prescott variance ratio:

var_model(y) / var_data(y) = (1.39/1.81)² ≈ 0.59.
Investment is about three times more volatile than output. Consumption is smoother than output in both data and model, but too smooth in the model compared to the data. Labor's volatility relative to output is too small compared to the data (mainly because capital is not volatile enough).

2. persistence and correlations.

The model generates persistence, but: (i) this persistence is lower than in the data and (ii) since the exogenous process is very persistent (a point to which we shall return below), it is clear that the model features a very weak internal propagation mechanism. I.e., the model generates little endogenous persistence (a point first noted by Cogley and Nason (1995, AER)). The model also generates substantial co-movement of macroeconomic aggregates with output (as judged by the contemporaneous correlations), partly consistent with the data. However, there are some discrepancies: the correlations predicted by the model for investment, labor, capital and productivity are larger than those found in the data. Moreover, the model generates a highly procyclical real wage (whereas in the data wages are roughly acyclical) and a highly procyclical interest rate (whereas in the data interest rates are countercyclical).
2.14 What have we learned?
A lot. We have built a model that relies on maximization by all agents and rational expectations in order to analyze fluctuations in macroeconomic time series. We learned how to solve this model step-by-step, how to understand the transmission of a technology shock and how to assess its merits by comparing its predictions to the data. You may well be appalled by the insistence on technology shocks as being the main source of fluctuations (indeed, many people would say you should be). But the importance of this framework goes well beyond the focus on technology, as I tried to emphasize in the introduction. READ King and Rebelo - section 4.5.
2.14.1 Critical parameters
1. Highly persistent - and volatile - technology shock. Note that E(at at−j) = ρ^j σε²/(1 − ρ²). The variance of productivity is hence σε²/(1 − ρ²), which is increasing in both σε² and ρ. Moreover, increasing productivity's variance by increasing ρ also implies increasing the persistence of productivity and hence overall persistence.
The crucial role of productivity's persistence can be better understood by conducting the following experiments: ρ = 0 vs. ρ = 0.979; ρ = 0 vs. ρ = 1.

2. Sufficiently elastic labor (either intertemporal or intratemporal - Greenwood, Hercowitz and Huffman) - section 6.1 of KR. This implies that work effort is highly responsive to changes in real wages. Therefore, it helps by generating both a lot of movement in hours and little movement in real wages. Assuming a highly elastic labor supply is inconsistent with micro evidence; however, models have been built that reconcile a low micro elasticity with a high macro elasticity. A prominent example is the indivisible labor model of Hansen and Rogerson - read KR, section 6.1 (and the references therein) if you want to know more.

3. Steady-state shares of C and I in Y (I/Y has to be small, otherwise the volatility of I would converge to that of Y).
2.14.2
Is the Solow residual the right measure for technology shocks? There are three main reasons why the answer is likely to be no:

1. The Solow residual can be forecasted using variables that are likely to be orthogonal to productivity: military spending, monetary aggregates, etc.

2. The Solow residual implies a large probability of technological regress (about 0.4 - Burnside, Eichenbaum and Rebelo, 1996).

3. Variable factor utilization (capital utilization and labor hoarding) contaminates the measured Solow residual. Basically, if there is unobserved variation in the utilization of factors of production, the Solow residual erroneously attributes this to technology, since it is measured using observed variation in factors of production. You can think of this as an endogeneity bias.

There are two possible ways to correct for variable utilization, i.e. address point 3 above, and doing that is also likely to influence points 1 and 2. First, one can use proxies for unobserved variation and re-compute Solow residuals. Examples of such proxies are: (i) the number of work accidents as a proxy for unobserved work effort (since working harder increases the probability of accidents, at least in an industrial setting); (ii) electricity use as a proxy for unobserved variation in capital utilization. The second possibility is to build a model that incorporates unobserved factor variation as an endogenous variable, express this as a function of other endogenous but observable variables and calculate the model-implied productivity series (see next section for an example). A corrected measure of the Solow residual that takes into account variable utilization (see Burnside, Eichenbaum and Rebelo, 1996) implies that: 1. productivity shocks are much less volatile; 2. the probability of technological regress drops dramatically.
These findings imply that a stronger amplification mechanism than that of the baseline RBC model is needed to explain observed fluctuations. Fortunately, the very same reason that biases the measure of the Solow residual also delivers this extra amplification: an RBC model incorporating variable utilization implies that small shocks to productivity have large aggregate effects.
2.14.3
In the part of the course taught by Professor Muellbauer, you will see two alternative specifications of preferences and technology respectively. In the first, the utility function is not intertemporally separable due to the presence of habits: current utility depends on last period's consumption. In the second, investment is subject to adjustment costs. Once you have covered this material, you may want to think of incorporating these features in the baseline RBC model. Both of these features, when introduced in our model, imply an extra channel by which endogenous persistence is generated. In the remainder, I will discuss intuitively the introduction of variable capital utilization hinted at above. In the presence of variable utilization, the production function becomes:
Y_t = F(Z_t K_t, L_t) = A_t (Z_t K_t)^α L_t^(1−α),
Z_t being the utilization rate. Using the capital stock more intensively affects the depreciation rate of capital, and the capital accumulation equation becomes:

K_{t+1} = (1 − δ(Z_t)) K_t + I_t    (2.43)
where δ(Z_t) satisfies δ_Z(Z_t) > 0, δ_ZZ(Z_t) > 0: the depreciation rate increases if capital is used more intensively, and does so at an increasing rate. We have introduced an extra variable, Z_t, hence we need one extra equation governing the choice of the utilization rate in order to determine equilibrium. The extra benefit of increasing the utilization rate is given by the extra output that is being created, F_Z(., .) = α A_t Z_t^(α−1) K_t^α L_t^(1−α). The marginal cost of increasing utilization is given by the higher investment needed to replace capital that is depreciating faster: δ_Z(Z_t) K_t. The optimal utilization rate is found by equating these two:

α A_t Z_t^(α−1) K_t^α L_t^(1−α) = δ_Z(Z_t) K_t  ⟺  α Y_t / Z_t = δ_Z(Z_t) K_t
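To see this efficiency condition at work numerically, here is a small sketch. It assumes a specific functional form, δ(Z) = δ0 · Z^(1+ξ)/(1+ξ) (the form hinted at in the Matlab appendix), so that δ_Z(Z) = δ0 · Z^ξ and the condition can be solved in closed form; all parameter values are illustrative, chosen so that Z = 1.

```python
def optimal_utilization(Y, K, alpha, delta0, xi):
    """Solve alpha*Y/Z = delta_Z(Z)*K for Z, assuming the functional form
    delta(Z) = delta0*Z**(1+xi)/(1+xi), so that delta_Z(Z) = delta0*Z**xi."""
    return (alpha * Y / (delta0 * K)) ** (1.0 / (1.0 + xi))

# Illustrative parameters, picked so that Z = 1 (a steady-state normalization):
Z = optimal_utilization(Y=1.0, K=10.0, alpha=1.0 / 3.0, delta0=1.0 / 30.0, xi=0.1)

# Verify the first-order condition: marginal benefit equals marginal cost
mb = (1.0 / 3.0) * 1.0 / Z            # alpha*Y/Z
mc = (1.0 / 30.0) * Z ** 0.1 * 10.0   # delta_Z(Z)*K
print(round(Z, 6), abs(mb - mc) < 1e-9)
```

With a convex δ(·), a higher Y/K ratio raises the chosen utilization rate, which is the channel behind the loglinear condition derived next.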
Let's loglinearise the production function and the efficiency condition. The production function becomes:

y_t = a_t + α k_t + α z_t + (1 − α) l_t    (2.44)
The efficiency condition becomes (use the third trick to loglinearise δ_Z(Z_t), just as we did for the marginal disutility of labor v_L(L_t)):

y_t − z_t = k_t + [δ_ZZ(Z) Z / δ_Z(Z)] z_t  ⟺  y_t = k_t + (1 + ξ) z_t    (2.45)
where ξ ≡ δ_ZZ(Z) Z / δ_Z(Z) denotes the elasticity of the marginal depreciation rate induced by extra utilization with respect to the utilization rate. When this elasticity tends to ∞, we are back in the standard model (2.45 then implies z_t = 0); note that ξ → ∞ if δ_Z(Z) → 0, which instead implies that the depreciation rate is not affected by utilization. Substituting the efficiency condition (2.45) into the production function (2.44) and eliminating z_t, we obtain a reduced-form production function:

y_t = [(1 + ξ)/(1 + ξ − α)] a_t + [αξ/(1 + ξ − α)] k_t + [(1 − α)(1 + ξ)/(1 + ξ − α)] l_t.    (2.46)

Two things can be noted by staring at this expression and comparing it with the benchmark case. First, the partial elasticity of output with respect to technology is increased - and the more so, the lower is ξ, i.e. the more we depart from the benchmark model. This induces extra amplification of technology shocks. Second, the partial elasticity of output with respect to labor is higher (and hence the partial elasticity with respect to capital is lower) than in the benchmark case, since αξ/(1 + ξ − α) < α. This tightens the link between output and labor fluctuations and will potentially generate more labor volatility. Relatedly, variable utilization also affects the cyclicality of wages. Wages are still given by the marginal product of labor, so:

w_t = y_t − l_t = [(1 + ξ)/(1 + ξ − α)] a_t + [αξ/(1 + ξ − α)] k_t − [αξ/(1 + ξ − α)] l_t.
Since αξ/(1 + ξ − α) is increasing in ξ, a lower ξ implies a flatter labor demand curve, which in turn implies that for a given shift of labor supply the wage will react less, and hours more, than in the benchmark model. However, the shift in labor demand will necessarily be larger under variable utilization, since (1 + ξ)/(1 + ξ − α) > 1. This is why in the simulation presented in Figure 5 the response of the real wage is higher, at least in the earlier quarters.
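A quick numerical check of the reduced-form coefficients in (2.46) may help. The sketch below uses the standard α = 1/3 and the King-Rebelo value ξ = 0.1 used in the simulation; it verifies the amplification claims, and that a very large ξ recovers the benchmark elasticities.

```python
def reduced_form_elasticities(alpha, xi):
    """Coefficients (e_a, e_k, e_l) of the reduced-form production function
    (2.46): y = e_a*a + e_k*k + e_l*l, after eliminating utilization z."""
    d = 1.0 + xi - alpha
    return (1.0 + xi) / d, alpha * xi / d, (1.0 - alpha) * (1.0 + xi) / d

alpha = 1.0 / 3.0
e_a, e_k, e_l = reduced_form_elasticities(alpha, xi=0.1)
# Amplification: e_a > 1, a higher labor elasticity and a lower capital one
print(round(e_a, 3), round(e_k, 3), round(e_l, 3))

# Benchmark limit: with xi very large, (e_a, e_k, e_l) -> (1, alpha, 1-alpha)
b_a, b_k, b_l = reduced_form_elasticities(alpha, xi=1e9)
print(round(b_a, 3), round(b_k, 3), round(b_l, 3))
```

Note also that e_k + e_l = 1 for any ξ, so the reduced form preserves constant returns in capital and labor.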
2.14.4
Finally, Figure 5 shows the effect of variable capacity utilization under the benchmark parameterization. The parameter ξ of the lecture notes has been set to 0.1 in the variable-utilization case (a value borrowed from King and Rebelo), and to a very large value in the fixed-utilization case. The figure confirms our discussion in the lecture notes and shows that variable utilization leads to an amplification of a given technology shock. Therefore, smaller shocks are enough to explain observed fluctuations.

[Figure 2.1: Impulse responses to a technology shock under fixed and variable utilization; panels include Capital, the Interest rate, the Utilization rate and Productivity, plotted over 40 quarters.]

Note that the figure keeps the labor supply elasticity at 2. The amplification induced by variable utilization is increasing with the labor supply elasticity - for the infinitely elastic labor case, we would end up with the high-substitution economy in King and Rebelo. The figure above abstracted from the re-measurement of the Solow residual that was one of the initial reasons to consider this extension in the first place. Following our discussion of the Solow residual above, note the two routes that could be followed in re-measuring productivity using this model. First, one could use (2.44), find a proxy for z_t, say ẑ_t (e.g. electricity use), and compute â_t from:

â_t = y_t − α k_t − α ẑ_t − (1 − α) l_t    (2.47)

Otherwise, one can use the structure of the model and arrive at the reduced-form
production function (hence eliminating the unobservable variable z_t) and compute:

â_t = [(1 + ξ − α)/(1 + ξ)] y_t − [αξ/(1 + ξ)] k_t − (1 − α) l_t    (2.48)

As noted above, after taking variable utilization into account the variance of the computed productivity shocks is generally much lower, but the model is still as able to replicate fluctuations, because this very feature induces extra amplification, as demonstrated in Figure 5 and the discussion preceding it.
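The two measurement routes, (2.47) and (2.48), can be sketched as follows (Python rather than Matlab; the inputs are made-up log-deviations, and α, ξ are the values used in the figure). A useful consistency check: if the proxy ẑ_t happens to equal the model-implied utilization z_t = (y_t − k_t)/(1 + ξ) from (2.45), the two routes return the same number.

```python
alpha, xi = 1.0 / 3.0, 0.1

def a_proxy(y, k, z_hat, l):
    # Route 1, eq. (2.47): loglinear production function (2.44) with a
    # proxy z_hat for utilization (e.g. electricity use).
    return y - alpha * k - alpha * z_hat - (1 - alpha) * l

def a_model(y, k, l):
    # Route 2, eq. (2.48): the model's reduced-form production function,
    # which eliminates the unobservable z_t.
    return ((1 + xi - alpha) / (1 + xi)) * y - (alpha * xi / (1 + xi)) * k - (1 - alpha) * l

# Illustrative log-deviations (hypothetical numbers, not data):
y, k, l = 1.2, 0.4, 1.0
z_model = (y - k) / (1 + xi)  # model-implied utilization from (2.45)
print(round(a_model(y, k, l), 4), round(a_proxy(y, k, z_model, l), 4))
```

Both routes attribute to technology only the part of output growth not explained by measured inputs and (proxied or implied) utilization, which is why the corrected series is less volatile.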
2.14.5 Where do we go
While the RBC model is successful at explaining some features of the data, it does a pretty poor job at explaining others. This is why research in this area has flourished over the past few decades, and it keeps expanding by incorporating various frictions such as imperfect competition, imperfect price or wage adjustment, investment adjustment costs, imperfect labor or financial markets, etc. However, what you should take home from this is that (most of) the modern macroeconomic literature takes this framework as a starting point in order to examine an incredible variety of issues. To give just one example we will not touch upon at all, modern monetary policy analysis also uses the baseline RBC model as a benchmark and adds imperfect price adjustment in order to examine the effects of nominal disturbances and optimal monetary policy. See Woodford (2003) if you are interested in these issues. For a recent paper on the effects of nominal disturbances and how to account for them in a frictions-rich DSGE model, see Christiano, Eichenbaum and Evans (2005).
2.14.6
Given the difficulties of technology shocks in accounting for some features of the data, some authors have naturally looked at other shocks, for example shocks to government spending. In particular, Christiano and Eichenbaum (AER, 1992) showed that adding this stochastic source of fluctuations helps resolve an important puzzle, namely the discrepancy between the high procyclicality of the real wage implied by the baseline RBC model and the relative acyclicality observed in the data. However, government spending shocks have other undesirable properties: they generally imply countercyclical consumption, in stark contrast with the data. Consumption falls in response to government spending shocks due to a negative wealth effect: government spending absorbs resources and makes the agent feel poorer by the present discounted value of the taxes used to finance this spending. This makes the agent consume less and work more for a given real wage; the latter effect implies that output increases. Therefore, conditional on government spending shocks, consumption will be countercyclical - in strong contrast to what you see in the data. If you want to know more about these issues, I recommend reading Baxter and King (AER, 1993) and Christiano and Eichenbaum (AER, 1992). Some recent
developments on these issues can be found in Gali, Lopez-Salido and Valles (2005) and in some of the references therein - in particular, you could check the ones whose authors' names start with 'B'. (Read these recent papers after you have covered sticky prices and monetary policy issues next term.)
2.15 The single most embarrassing prediction of the frictionless model: The Equity Premium Puzzle (Back to Asset Pricing)
I want to end this set of lectures with one example of a puzzle that has been known for about 20 years and has not yet been resolved in a satisfactory way (you will review some attempts, as well as other puzzles in asset pricing, with Professor Muellbauer next term). This is a good example of good news for people who want to do Macro/Finance - there is a lot left to do. This is the Equity Premium Puzzle (the original contribution is due to Mehra and Prescott, 1985). Empirically, the average return on equity in the US economy over the past 50 years, as judged by the return on the S&P500, has been about 8.1% at an annualized rate. The risk-free rate, judged by the return on Treasury Bills, has been much lower, at about 0.9%. Therefore, the equity premium is about 7.2%! The size of the premium varies depending on the period under consideration, the definition of returns, etc., but the main idea is always there: stocks give you a much higher return than bonds. How come? Can we reconcile this with our model?

Remember our asset equations for stocks and riskless bonds respectively (I will now work with shares and denote their return 1 + R^S_{t+1} = (P_{t+1} + D_{t+1})/P_t, but remember you could do the same with physical capital):

1 = E_t [Λ_{t,t+1} (1 + R^S_{t+1})]
1 = (1 + R_{t+1}) E_t [Λ_{t,t+1}]

I have put a t+1 index on the return on bonds although it is known at time t, to emphasize that it is being paid at time t+1. To understand the basic idea, let us assume that returns and the stochastic discount factor are jointly lognormal and homoskedastic, just as we assumed when we loglinearised the Euler equation for the RBC model. Remember that for a lognormal variable:

ln E_t X_{t+1} = E_t ln X_{t+1} + (1/2) var_t ln X_{t+1}.

If X is also homoskedastic, conditional second moments are equal to unconditional second moments, so we can drop the time subscript on second moments:

ln E_t X_{t+1} = E_t ln X_{t+1} + (1/2) var ln X_{t+1}.
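The lognormal moment formula can be checked numerically with a quick Monte Carlo sketch (the parameter values are purely illustrative):

```python
import math
import random

# Check ln E[X] = E[ln X] + 0.5*Var[ln X] when ln X ~ Normal(mu, sigma^2)
random.seed(0)
mu, sigma = 0.02, 0.5
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
lhs = math.log(sum(draws) / len(draws))  # ln of the sample mean of X
rhs = mu + 0.5 * sigma ** 2              # E[ln X] + 0.5*Var[ln X]
print(round(lhs, 3), round(rhs, 3))      # the two should be close
```

The extra variance term is exactly the Jensen's-inequality correction that will reappear in the equity premium expression below.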
Taking logs of the share-price equation we get (as throughout this course, we use ln(1 + a) ≈ a):

0 = ln E_t [Λ_{t,t+1} (1 + R^S_{t+1})] = E_t λ_{t,t+1} + E_t R^S_{t+1} + (1/2)σ²_λ + (1/2)σ²_S + σ_{λS}    (2.49)
where λ_{t,t+1} = ln Λ_{t,t+1}, σ²_X = var[ln X_{t+1} − E_t ln X_{t+1}], i.e. the variance of innovations to the log of variable X = R^S_{t+1}, Λ_{t,t+1}, and σ_{λS} is similarly the unconditional covariance between the innovations to the logs of the return and the stochastic discount factor. Let's do the same now for the riskless asset - an asset whose return is known with certainty and uncorrelated with the stochastic discount factor:

0 = ln [(1 + R_{t+1}) E_t Λ_{t,t+1}] = E_t λ_{t,t+1} + R_{t+1} + (1/2)σ²_λ    (2.50)

Now subtract (2.50) from (2.49) to get:

Equity Premium = E_t R^S_{t+1} − R_{t+1} + (1/2)σ²_S = −σ_{λS}    (2.51)
This equation already illustrates the equity premium. The term on the left-hand side is the equity premium, corrected for a measure of risk for stocks, (1/2)σ²_S, that comes simply from Jensen's inequality (do not get confused about this; this term would disappear if we wrote the premium using the expectation of the log gross return on shares, so that the left-hand side becomes E_t ln(1 + R^S_{t+1}) − ln(1 + R_{t+1})). The right-hand side says that in this model the equity premium is given by the negative of the covariance of the stochastic discount factor with the share return. When the return on an asset co-varies negatively with the stochastic discount factor, consumers demand a high premium in order to hold it, for a simple reason: the asset tends to have low returns, and hence decreases the value of wealth, precisely when consumers need it more (when the marginal utility of consumption today is lower than in the future, i.e. when the stochastic discount factor is high: remember that Λ_{t,t+1} = β U_C(C_{t+1})/U_C(C_t) is high when marginal utility of consumption today is low compared to the future one).

A simple functional form for the utility function makes this more transparent. Consider the CRRA utility function (and let labor supply be inelastic): U(C) = C^(1−γ)/(1 − γ), where γ parameterizes both the coefficient of relative risk aversion and the inverse of the elasticity of intertemporal substitution. (When γ → 1 we are back to the ln C case.) The stochastic discount factor in this case is Λ_{t,t+1} = β (C_{t+1}/C_t)^(−γ), so (letting small letters denote logs): λ_{t,t+1} = ln β − γ Δc_{t+1}. The equity premium becomes:

Equity Premium = E_t R^S_{t+1} − R_{t+1} + (1/2)σ²_S = γ σ_{CS},    (2.52)
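Equation (2.52) can be used to back out the risk aversion that a given premium requires. A sketch with purely illustrative numbers (neither the premium nor the covariance below is an estimate):

```python
# Implied risk aversion from (2.52): premium = gamma * sigma_CS, where
# sigma_CS is the covariance between innovations to log consumption and
# the log equity return. All numbers below are illustrative, not estimates.
premium = 0.06      # hypothetical Jensen-corrected equity premium
sigma_cs = 0.0002   # hypothetical consumption-return covariance
gamma = premium / sigma_cs
print(gamma)  # an implausibly large risk aversion coefficient
```

Because consumption growth co-varies only weakly with returns in the data, dividing a large premium by a tiny covariance mechanically produces an enormous γ - this is the puzzle in one line.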
i.e. the product of the relative risk aversion coefficient and the covariance between innovations to consumption and equity returns. Intuitively, a large equity premium is required when (for a given risk aversion) there is a high covariance between returns and consumption, for in this case the asset delivers low returns when consumption is low (when the marginal utility of consumption is high). You can look at this equation in two ways. First, you can think of the equity premium itself as the relevant moment to match: build a general equilibrium model in which returns and consumption are determined endogenously, calculate the artificial covariance between consumption and returns, parameterize risk aversion, and see for which risk aversion you can match the observed equity premium of around 6.9%. Secondly, you can use the observed, i.e. data, covariance between consumption and returns, find the implied risk aversion coefficient, and then ask yourself whether the number you get is reasonable. Most studies, independently of the approach they use and of the type of data employed, find that an extremely high risk aversion coefficient is needed to explain the equity premium. This is much, much higher than any plausible empirical estimate; it also has highly unrealistic implications for the behaviour of individuals. Macro- and microeconomists usually think of this number as being no higher than 2 or 3.

A related puzzle came to be known as the risk-free rate puzzle (Weil, 1989). Look at the equation for bond returns, substituting for the CRRA utility function:

R_{t+1} = −ln β + γ E_t Δc_{t+1} − (γ²/2) σ²_C

Abstract from the last term. Given positive observed consumption growth, say c̄ = E_t Δc_{t+1} > 0, a high risk aversion parameter can only be reconciled with low risk-free rates, as we observe in the data, if β > 1. This implies negative time preference, something I am sure most of you will consider implausible.
Intuitively, CRRA utility links risk aversion and intertemporal substitution: high risk aversion automatically means a low desire to substitute consumption intertemporally. A consumer who is unwilling to substitute intertemporally, when faced with low interest rates and positive consumption growth, would like to bring consumption into the present, i.e. to borrow. A low interest rate can only be an equilibrium if the rate of time preference is very low or even negative. The last term, −(γ²/2) σ²_C, helps because for a high γ the risk-free rate is brought down (the variance term is always positive). This term comes from a precautionary savings motive: agents want to save to protect themselves from uncertainty related to future consumption variability. This desire to save works against the tendency to borrow. You will study next term various attempts to deal with these puzzles, based on non-separabilities in the utility function, on preferences that disentangle risk aversion from intertemporal substitution, on heterogeneous agents, on limited asset market participation, etc.
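The risk-free rate puzzle can likewise be illustrated numerically. The sketch below inverts the bond equation to find the β needed to deliver a 1% risk-free rate, given a high γ, 2% mean consumption growth and 1% consumption volatility (all hypothetical numbers); the point is simply that β comes out above 1.

```python
import math

def riskfree_rate(beta, gamma, g, sigma_c):
    """Loglinearised bond pricing under CRRA utility:
    R = -ln(beta) + gamma*E[dc] - (gamma**2/2)*Var(dc)."""
    return -math.log(beta) + gamma * g - 0.5 * (gamma ** 2) * (sigma_c ** 2)

# Illustrative numbers: a gamma of the size the equity premium seems to call
# for, 2% consumption growth, 1% consumption volatility, 1% target rate.
gamma, g, sigma_c, target_R = 50.0, 0.02, 0.01, 0.01
beta_needed = math.exp(gamma * g - 0.5 * (gamma ** 2) * (sigma_c ** 2) - target_R)
print(round(beta_needed, 3), beta_needed > 1.0)
```

Raising σ_c strengthens the precautionary term and lowers the β required, which is why the variance term "helps", but for plausible consumption volatility it is far too small to undo the γ·g term.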
Bibliography
[1] Baxter, M., and R. G. King (1993): "Fiscal Policy in General Equilibrium," American Economic Review 83: 315-334.
[2] Bilbiie, F. O., F. Ghironi, and M. J. Melitz (2006): "Endogenous Entry, Product Variety and Business Cycles," manuscript, Oxford University, Boston College, and Harvard University.
[3] Blanchard, O. J., and C. M. Kahn (1980): "The Solution of Linear Difference Models under Rational Expectations," Econometrica 48(5): 1305-1312.
[4] Campbell, J. Y. (1994): "Inspecting the Mechanism: An Analytical Approach to the Stochastic Growth Model," Journal of Monetary Economics 33: 463-506.
[5] Christiano, L., and M. Eichenbaum (1992): "Current Real-Business-Cycle Theories and Aggregate Labor-Market Fluctuations," American Economic Review 82: 430-450.
[6] Christiano, L., M. Eichenbaum, and C. Evans (2005): "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy," Journal of Political Economy 113(1): 1-46.
[7] Cochrane, J. (2004): "Money as Stock," forthcoming, Journal of Monetary Economics.
[8] Cogley, T., and J. M. Nason (1995): "Output Dynamics in Real-Business-Cycle Models," American Economic Review 85: 492-511.
[9] *Cooley, T. F., and E. Prescott: "Economic Growth and Business Cycles," Chapter 2 in Cooley, T. F. (ed.), 1995, Frontiers of Business Cycle Research, Princeton University Press, Princeton.
[10] Farmer, R. (1999): The Macroeconomics of Self-Fulfilling Prophecies, MIT Press, Cambridge, MA.
[11] Galí, J., J. D. López-Salido, and J. Vallés (2002): "Understanding the Effects of Government Spending on Consumption," mimeo, CREI.
[12] *King, R., and S. Rebelo (2000): "Resuscitating Real Business Cycles," in Taylor, J., and M. Woodford (eds.), Handbook of Macroeconomics, North-Holland.
[13] Kydland, F. E., and E. C. Prescott (1982): "Time to Build and Aggregate Fluctuations," Econometrica 50: 1345-1370.
[14] Ljungqvist, L., and T. Sargent (2004): Recursive Macroeconomic Theory, 2nd Edition, MIT Press, Cambridge, MA.
[15] Lubik, T., and F. Schorfheide (2003): "Computing Sunspot Equilibria in Linear Rational Expectations Models," Journal of Economic Dynamics and Control 28(2): 273-285.
[16] Lucas, R. E., Jr. (2005): "Present at the Creation: Reflections on the 2004 Nobel Prize to Finn Kydland and Edward Prescott," Review of Economic Dynamics 8: 777-779.
[17] Lucas, R. E., Jr. (1981): Studies in Business-Cycle Theory, MIT Press, Cambridge, MA.
[18] Lucas, R. E., Jr. (1985): Models of Business Cycles, Yrjö Jahnsson Lectures, Basil Blackwell, Oxford.
[19] Prescott, E. (1986): "Theory Ahead of Business Cycle Measurement," Federal Reserve Bank of Minneapolis Quarterly Review 10.
[20] Stokey, N., and R. Lucas, with E. Prescott (1989): Recursive Methods for Macroeconomic Dynamics, Harvard University Press, Cambridge, MA.
[21] Woodford, M. (2003): Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton University Press, Princeton, NJ.
Appendix A
Matlab programmes
A.1 Matlab code for the baseline, elastic-labor model
The following code solves the baseline RBC model and plots impulse responses. I included a loop over parameters that allows you to plot responses for different parameterizations on the same graph. You need the file solvek.m that you already have from Dr. Meeks, from the computational classes. Indeed, the codes here use precisely the same solution method. You also need dimpulse.m, but this should already be in the Toolbox. Make sure these are in the same directory as the model file. Note: mu in the code is the labor supply elasticity.
clear all;
% Number of variables: nx is the number of variables in the whole system,
% nz is the number of non-predetermined variables
nx = 11; nz = 2; nu = nz;
%/****************************************************/
% @ Parameters of the model @
%/****************************************************/
% Deep parameters
loop=1; % loop over parameters; here loop over L elasticity and A persistence
for loop = 1:2
    if loop==1
        mu=0;      % labor elasticity
        phia=1;
    else
        mu=0;      % labor elasticity
        phia=0.979;
    end
    r=0.01;
    delta=0.025;
%@ Equation 10: wage @
B(6,wpos)=1; B(6,ypos)=-1; B(6,hpos)=1;
%@ Rental rate @
B(7,rkpos)=1; B(7,ypos)=-1; B(7,kpos)=1;
%@ Resource constraint (could replace with HH budget constraint) @
B(8,ypos)=1; B(8,ipos)=-si; B(8,cpos)=-sc;
% Define E(t)cs(t+1)
A(9,cpos) = 1; B(9,ec1pos) = 1;
% Define E(t)r(t+1)
A(10,rpos) = 1; B(10,er1pos) = 1;
% Define E(t)k(t+1)
A(11,kpos) = 1; B(11,ek1pos) = 1;
phi(apos,apos) = phia; % technology process introduced here
phi(gpos,gpos) = 0;
[m,n,p,q,z22h,s,t,lambda] = solvek(A,B,C,phi,nk);
bigmn = [m n]; bigpq = [p q]; bigp = phi; bigpsi = eye(nz,nu);
%%%%%%%%%%%%%%%%%% IRF analysis %%%%%%%%%%%%%%%%%%%%%%%%
% Have to specify ires and ishock, the index values for the
% responding variable and the shock
% Using the solution of the model in state space form
%   x(t+1) = Ax(t) + Bu(t+1)
%   y(t)   = Cx(t) + Du(t)
ishock = 1;
npts = 100; % no of points plotted
% Loop over parameters, assign calculated solution for each case to one IRF
if loop == 1
    A1 = [p q; zeros(nz,nk) phi];
    C1 = [m n];
    D1 = zeros(nx-nk,nu);
    B1 = [zeros(nk,nu); bigpsi];
    %[Y,X]=dimpulse(A,B,C,D,ishock,npts+1); % does not work when capital is defined as here
    [Y1,X1] = dimpulse(A1,B1,C1,D1,ishock,npts+2);
    YP1 = Y1(2:npts+2,:);
    XP1 = X1(2:npts+2,:);
else
    A2 = [p q; zeros(nz,nk) phi];
    C2 = [m n];
    D2 = zeros(nx-nk,nu);
    B2 = [zeros(nk,nu); bigpsi];
    %[Y,X]=dimpulse(A,B,C,D,ishock,npts+1); % does not work when capital is defined as here
    [Y2,X2] = dimpulse(A2,B2,C2,D2,ishock,npts+2);
    YP2 = Y2(2:npts+2,:);
    XP2 = X2(2:npts+2,:);
end % ends if statement
% move on after first case:
loop = loop+1;
end % ends for statement
jj = [0:npts];
%i1 = Y(:,ires); % column index is the element of y you want
subplot(3,3,1)
plot(jj,YP1(:,ypos), jj,YP2(:,ypos),'-.r')
title('Output')
%axis([0 20 -.5 .5])
legend('rhoa=1','rhoa=0.979')
text(0,YP1(1,ypos)+0.7,'FIGURE 4: Responses to unit technology shock, the role of shock persistence')
subplot(3,3,2)
plot(jj,YP1(:,cpos),jj,YP2(:,cpos),'-.r')
title('Consumption')
%axis([0 20 -.25 .25])
subplot(3,3,3)
plot(jj,YP1(:,ipos),jj,YP2(:,ipos),'-.r')
title('Investment')
%axis([0 20 -.5 .5])
subplot(3,3,4)
plot(jj,YP1(:,hpos),jj,YP2(:,hpos),'-.r')
title('Labor')
%axis([0 20 -1.5 1.5])
subplot(3,3,5)
plot(jj,YP1(:,wpos),jj,YP2(:,wpos),'-.r')
title('Real wage')
%axis([0 20 -.5 .5])
subplot(3,3,6)
%plot(jj,YP(:,ek1pos))
plot(jj,XP1(:,1),jj,XP2(:,1),'-.r')
A.2 Matlab code for the variable-utilization model

The following code solves, and plots impulse responses for, the variable-utilization model.
clear all;
% Number of variables: nx is the number of variables in the whole system,
% nz is the number of non-predetermined variables
nx = 12; nz = 2; nu = nz;
%/****************************************************/
% @ Parameters of the model @
%/****************************************************/
% Deep parameters
loop=1; % loop over parameters; here loop over L elasticity and A persistence
for loop = 1:2
    if loop==1
        mu=2;            % labor elasticity
        phia=0.979;
        csi=5000;        % this ensures (almost) consistency with elasdelta=0
        elasdelta=0;     % this ensures deriv. of delta wrt z is zero
        delta=0.025;
    else
        mu=2;            % labor elasticity
        phia=0.979;
        csi=0.1;
        elasdelta=1+csi; % this is when delta=Z^(1+csi)/(1+csi)
        delta=0.025;
    end
    r=0.01;
%@ Equation 10: wage @
B(6,wpos)=1; B(6,ypos)=-1; B(6,hpos)=1;
%@ Rental rate @
B(7,rkpos)=1; B(7,ypos)=-1; B(7,kpos)=1;
%@ Resource constraint (could replace with HH budget constraint) @
B(8,ypos)=1; B(8,ipos)=-si; B(8,cpos)=-sc;
% Define E(t)cs(t+1)
A(9,cpos) = 1; B(9,ec1pos) = 1;
% Define E(t)r(t+1)
A(10,rpos) = 1; B(10,er1pos) = 1;
% Define E(t)k(t+1)
A(11,kpos) = 1; B(11,ek1pos) = 1;
% Efficient utilization
B(12,ypos) = 1; B(12,kpos) = -1; B(12,zpos) = -(1+csi);
phi(apos,apos) = phia; % technology process introduced here
phi(gpos,gpos) = 0;
[m,n,p,q,z22h,s,t,lambda] = solvek(A,B,C,phi,nk);
bigmn = [m n]; bigpq = [p q]; bigp = phi; bigpsi = eye(nz,nu);
%%%%%%%%%%%%%%%%%% IRF analysis %%%%%%%%%%%%%%%%%%%%%%%%
% Have to specify ires and ishock, the index values for the
% responding variable and the shock
% Using the solution of the model in state space form
%   x(t+1) = Ax(t) + Bu(t+1)
%   y(t)   = Cx(t) + Du(t)
ishock = 1;
npts = 40; % no of points plotted
% Loop over parameters, assign calculated solution for each case to one IRF
if loop == 1
    A1 = [p q; zeros(nz,nk) phi];
    C1 = [m n];
    D1 = zeros(nx-nk,nu);
    B1 = [zeros(nk,nu); bigpsi];
    %[Y,X]=dimpulse(A,B,C,D,ishock,npts+1); % does not work when capital is defined as here
    [Y1,X1] = dimpulse(A1,B1,C1,D1,ishock,npts+2);
    YP1 = Y1(2:npts+2,:);
    XP1 = X1(2:npts+2,:);
else
    A2 = [p q; zeros(nz,nk) phi];
    C2 = [m n];
    D2 = zeros(nx-nk,nu);
    B2 = [zeros(nk,nu); bigpsi];
    %[Y,X]=dimpulse(A,B,C,D,ishock,npts+1); % does not work when capital is defined as here
    [Y2,X2] = dimpulse(A2,B2,C2,D2,ishock,npts+2);
    YP2 = Y2(2:npts+2,:);
    XP2 = X2(2:npts+2,:);
end % ends if statement
% move on after first case:
loop = loop+1;
end % ends for statement
jj = [0:npts];
%i1 = Y(:,ires); % column index is the element of y you want
subplot(3,3,1)
plot(jj,YP1(:,ypos), jj,YP2(:,ypos),'-.r')
title('Output')
%axis([0 20 -.5 .5])
legend('fixed utilization','variable utilization')
text(0,YP2(1,ypos)+0.6,'FIGURE 5: Responses to unit technology shock, the role of variable utilization')
subplot(3,3,2)
plot(jj,YP1(:,cpos),jj,YP2(:,cpos),'-.r')
title('Consumption')
%axis([0 20 -.25 .25])
subplot(3,3,3)
plot(jj,YP1(:,ipos),jj,YP2(:,ipos),'-.r')
title('Investment')
%axis([0 20 -.5 .5])
subplot(3,3,4)
plot(jj,YP1(:,hpos),jj,YP2(:,hpos),'-.r')
title('Labor')
%axis([0 20 -1.5 1.5])