Richmond Fed Research Digest: Frictional Wage Dispersion in Search Models: A Quantitative Assessment
Summaries of work by economists in the Bank's Research Department published externally from June 1, 2011, through May 31, 2012

Welcome to the inaugural issue of the Richmond Fed Research Digest. The Federal Reserve Bank of Richmond produces several publications that feature the work of economists in its Research Department, but those economists also publish extensively in other venues. The Richmond Fed Research Digest, a mid-year annual, brings this externally published research together in one place with brief summaries, full citations, and links to the original work. (Please note that access to articles may require registration or payment.) So bookmark this spot on the Richmond Fed website and mark your calendar for June 28, 2013, when the Bank will publish the next issue of the Richmond Fed Research Digest.
2012
The authors of this paper (Andreas Hornstein of the Richmond Fed, Per Krusell of Stockholm University, and Giovanni Violante of New York University) propose using job search theory and data on labor market turnover to estimate frictional wage dispersion. They measure wage dispersion through the mean-min wage ratio, that is, the ratio of the average accepted wage to the lowest accepted wage. For many job search models, the authors demonstrate that the mean-min wage ratio can be calculated by considering only worker preferences (reflected in the value of nonmarket time and the interest rate) and worker-flow statistics.
In the most basic job search model, the mean-min wage ratio is a function of the interest rate, the value of nonmarket time, the rate at which workers exit from employment, and the rate at which workers exit from unemployment. For empirically reasonable values of these parameters, the mean-min wage ratio is only 1.05. In other words, the average accepted wage is 5 percent above the lowest accepted wage. The most basic job search model relies on
four assumptions: perfect correlation between job values and initial wage, risk neutrality, random search, and no on-the-job search. The authors relax these assumptions one by one to study the effects on frictional wage dispersion as measured by the mean-min wage ratio. Relaxing the first three assumptions does not change the ratio significantly, but allowing on-the-job search raises the ratio to 1.25, a five-fold increase in relative terms, though still a fairly small ratio in absolute terms. The authors then find that further modifications of on-the-job search models, such as introducing endogenous search effort, counteroffers among competing employers, or commitments to wage-tenure contracts, can generate much higher frictional wage dispersions, with mean-min ratios greater than 2. The mean-min wage ratio turns out to be a valuable tool for interpreting the empirical findings in the literature that estimates structural search models. Since the authors have shown that frictional wage dispersion is small in standard search models, researchers who estimate structural search models using worker-flow statistics and wage-distribution data must tolerate very large estimates of measurement error, substantial unobserved worker heterogeneity, or implausible parameter values, such as very low estimates of the value of nonmarket time or extremely high estimates of the interest rate.
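The baseline calculation can be sketched in a few lines. In the simplest sequential search model, the reservation wage w* satisfies w* = b + (λ/(r+σ))(w̄ − w*), where b = ρ·w̄ is the value of nonmarket time as a fraction of the mean accepted wage, giving Mm = w̄/w* = (1+k)/(ρ+k) with k = λ/(r+σ). The parameter values below are illustrative monthly U.S.-style figures in the spirit of the paper's exercise, not the authors' exact calibration.

```python
def mean_min_ratio(rho, r, sigma, lam_u):
    """Mean-min wage ratio in the basic sequential search model.

    rho    -- value of nonmarket time as a fraction of the mean accepted wage
    r      -- interest rate (monthly)
    sigma  -- rate at which workers exit employment (monthly)
    lam_u  -- rate at which workers exit unemployment (monthly)
    """
    k = lam_u / (r + sigma)
    return (1 + k) / (rho + k)

# Illustrative monthly values (assumed, not the paper's exact calibration)
print(round(mean_min_ratio(rho=0.4, r=0.004, sigma=0.034, lam_u=0.43), 3))  # 1.051
```

With high job-finding rates relative to discounting and separations, k is large and the ratio collapses toward 1, which is the intuition behind the paper's 1.05 figure.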
http://dx.doi.org/10.1257/aer.101.7.2873
Falling housing prices since the start of the Great Recession have led to a greater number of mortgage defaults and foreclosures. Defaults on home loans became prevalent in the United States at the height of the crisis as borrowers either suffered income shocks that prevented them from making payments or chose to default on underwater properties. Andra Ghent of the City University of New York and Marianna Kudlyak of the Richmond Fed explore one factor that may influence default risk: recourse loans versus non-recourse loans. Under a non-recourse loan, the borrower is not liable for more than the value of the collateral. In the case of homes that have fallen in value since the loans were issued, borrowers who default on mortgages in non-recourse states will not be sued for the difference between the value of the home and the unpaid balance of the mortgage. Recourse loans, however, do allow lenders to seek additional payment. Ghent and Kudlyak look at home loans originated between August 1997 and December 2008, a data set comprising 2.9 million mortgages, with recourse loans accounting for roughly two-thirds of that total. They explore whether borrowers in states allowing for recourse loans were more or less likely to default on their loans. Ghent and Kudlyak find that recourse laws do have a significant impact on borrowers' gains from defaulting, leading to different threshold values for default in recourse states versus non-recourse states. Since recourse adds an additional cost to defaulting, borrowers with recourse loans would be less likely to choose default at the same level of negative equity relative to non-recourse borrowers. Indeed, other things equal, it takes 8.6 percent more negative equity in a recourse state to reach the same probability of default as in a non-recourse state. On average, the probability of default is 32 percent higher in non-recourse states when measuring the interaction between the law and the value of the default option.
The authors also find that this effect is not uniform for all households. Lenders can recover greater value in recourse proceedings from households with more assets, which are more likely to have mortgages on higher-value properties. Thus, recourse deters default on mortgages for properties worth $200,000 or more, an effect that generally becomes more pronounced as values rise. For homes appraised between $300,000 and $500,000, the probability of default is 81 percent higher in non-recourse states. For homes ranging from $500,000 to $750,000, it is more than 100 percent higher. Ghent and Kudlyak also explore other effects of recourse loans on borrower behavior. Borrowers in recourse states are more likely to cure their defaults, that is, to resume payments to some degree within one year following an initial 60-day delinquency. The authors propose (but cannot confirm) that one contributing factor to this higher cure rate may be that a greater share of defaults in recourse states are driven by liquidity constraints rather than strategic decisions. Alternatively, the higher cure rate may indicate that borrowers in recourse states are not aware that their mortgages allow for recourse until they become delinquent. Ghent and Kudlyak suggest that both factors may contribute to the results, and they note that the lower default rate for recourse loans provides evidence that some mortgage defaults in the overall data set are strategic.
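The threshold logic can be made concrete with a stylized decision rule: a borrower defaults once equity falls below some cutoff, and recourse pushes that cutoff 8.6 percentage points deeper into negative equity, per the paper's estimate. The non-recourse cutoff of −20 percent below is an assumed placeholder, not a number from the paper.

```python
def would_default(equity_pct, recourse, threshold=-0.20, recourse_shift=0.086):
    """Stylized default rule: default when home equity drops below a cutoff.

    equity_pct     -- equity as a fraction of home value (negative = underwater)
    recourse       -- True if the mortgage is a recourse loan
    threshold      -- illustrative non-recourse default cutoff (assumed)
    recourse_shift -- extra negative equity needed under recourse,
                      following Ghent and Kudlyak's 8.6 percent estimate
    """
    cutoff = threshold - (recourse_shift if recourse else 0.0)
    return equity_pct < cutoff

# A borrower 25 percent underwater crosses the non-recourse cutoff
# but not the deeper recourse one
print(would_default(-0.25, recourse=False))  # True
print(would_default(-0.25, recourse=True))   # False
```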
http://dx.doi.org/10.1093/rfs/hhr055
Payment card usage in the United States continues to grow. Recent data show that consumers use either debit or credit cards to make more than half of their purchases. Many payment card services are free to consumers, but card networks charge merchant fees (often referred to as interchange fees) for each transaction. Merchants have complained that these fees are unreasonably high. The federal government recently capped debit card interchange fees and also has considered other regulatory proposals. One suggested change would require card networks to charge fixed fees per transaction rather than fees proportional to the size of each purchase, as has been common practice.
Oz Shy of the Boston Fed and Zhu Wang of the Richmond Fed (this paper was written while Wang was at the Kansas City Fed) explore why card networks charge proportional fees rather than fixed per-transaction fees, and they compare the social welfare implications of both regimes. They note that the costs of providing payment card services cannot explain the decision by card networks to charge proportional fees. In particular, debit cards do not provide float and bear very little fraud risk, so there appears to be no cost basis for charging proportional fees. To address this puzzle, Shy and Wang develop a simple model where consumers submit purchases to their card networks, and the networks remit those payments to the merchants minus either a proportional or fixed fee. They find that when card networks and merchants both have market power (that is, they face little competition), card networks earn higher profits by charging proportional fees. They also find that competition among merchants reduces card networks' gains from using proportional fees relative to fixed per-transaction fees. Merchants are found to earn lower profits under proportional fees, whereas consumer utility and social welfare are higher.
Shy and Wang's findings shed light on related policy debates. Since card networks with market power charge fees higher than the marginal costs of handling transactions, the resulting market allocation may deviate from the social optimum. This concerns policymakers who try to align payment card fees with the cost basis. While it may be difficult to directly regulate card-fee levels, it seems natural and easy to regulate card-fee structures, such as requiring fixed per-transaction fees for payment cards that incur only a fixed cost per transaction. However, Shy and Wang's findings suggest that such a regulation may increase merchant profits at the expense of card networks, consumers, and social welfare. Therefore, policymakers should proceed with caution when considering intervention in the payment card market.
http://dx.doi.org/10.1257/aer.101.4.1575
Hetzel advocates more interaction between FOMC members and academic economists. Regional Reserve Bank presidents, for example, could host quarterly forums with academic economists in their districts, and members of the Board of Governors could engage nationally recognized experts in similar discussions. Both groups should translate this dialogue into language the general public can understand. Hetzel also recommends that the Fed establish a Monetary Policy Evaluation Group that would act as external auditors of the policy process. This proposed group of economists would test different classes of models and assess alternative strategies for monetary policy outside of any institutional pressure to rationalize past actions. The group would make predictions about the outcomes of current FOMC policy and compare those predictions to actual outcomes. These reforms would help the Federal Reserve conduct and explain monetary policy in a better analytical and institutional framework.
http://dx.doi.org/10.1016/j.jmacro.2012.02.010
http://dx.doi.org/10.1016/j.euroecorev.2011.10.002
http://dx.doi.org/10.1257/mac.4.3.153
Optimal Risk Sharing and Borrowing Constraints in a Continuous-Time Model with Limited Commitment
By Borys Grochulski and Yuzhe Zhang. Journal of Economic Theory, November 2011, vol. 146, no. 6, pp. 2356–2388.
Individuals, firms, and governments all face constraints on how much they can borrow, but as recent events have demonstrated, those constraints are not always enough to prevent default. When lenders have limited capacity to enforce a contract, and borrowers cannot fully commit to honoring that contract, there is risk of default. Borys Grochulski of the Richmond Fed and Yuzhe Zhang of the University of Iowa develop a model to determine the optimal borrowing constraints needed to prevent default. The authors first explore an optimal long-term contracting model with a risk-neutral, fully committed lender and a risk-averse, non-committed borrower. The borrower can choose to default, but if he does, he must finance all future consumption without borrowing. As long as maintaining the lending contract allows the borrower to enjoy higher consumption (and thus higher utility) than he would be able to obtain with only his own income, the borrower will not default. Thus, the authors show that to prevent default in this model, the lender must increase the borrower's available consumption if the borrower's income increases to a new maximum. Otherwise, it would be preferable for the borrower to default.
Grochulski and Zhang then study a model that implements a simple trading mechanism to achieve the optimal long-term contract described in the first model. In this second model, the lender offers the borrower two trading accounts: a bank account that enables the borrower to save or borrow at a riskless interest rate, and a hedging account that allows the borrower to transfer income risk to the lender with fair-odds pricing. The hedging account has no limitations placed on it, but the bank account is subject to a borrowing limit equal to the total value of the relationship between the lender and the borrower. The authors show that in equilibrium, the borrower never defaults under these conditions. This two-account system replicates the outcomes of a system in which the borrower can fully commit to repayment. Using this model, Grochulski and Zhang are also able to calculate the optimal credit limit based on the underlying commitment friction. They conclude that the optimal credit limit equals the value of the surplus generated by the relationship between the lender and the borrower.
http://dx.doi.org/10.1016/j.jet.2011.10.007
The search and matching model has become the workhorse framework for addressing a wide range of labor market issues in macroeconomics. However, the literature focuses almost exclusively on American and European economies. In an application to a highly competitive market economy, Thomas Lubik of the Richmond Fed analyzes aggregate labor market dynamics in Hong Kong from the perspective of a standard search and matching model. Lubik focuses on two broad empirical aspects. He studies how well the theoretical search and matching model describes the behavior of labor market variables, and he provides estimates of the structural labor market parameters as benchmarks for future research. To address these issues, Lubik develops a simple search and matching model that describes the observed outcomes of unemployment and vacancy postings arising from the interplay of workers seeking jobs and firms seeking employees. He estimates the model using Bayesian methods for quarterly data on unemployment and vacancies from 2000 to 2010. Two aspects of the paper's empirical approach are rarely used in other search and matching analyses. First, Lubik assumes that the model is driven by a persistent shock to the separation rate of workers into unemployment and by a more standard productivity shock. Second, he conducts an extensive preliminary analysis to help set the priors for the Bayesian estimation, and he gathers prior information from a limited-information approach to the empirical model. The Bayesian estimation shows that the search and matching model captures aggregate labor market dynamics in Hong Kong quite well. The estimates of the structural labor market parameters are broadly consistent with analyses of U.S. data, although there are small differences for some parameters. Specifically, Hong Kong's separation rate is lower, as are the match elasticity and the match efficiency.
The main driver of unemployment and vacancy fluctuations is productivity, with shocks to separations playing only a subordinate role. This suggests that the Hong Kong labor market exhibits little turnover in normal times. However, in times of economic distress, unemployment rises sharply because firms are willing to quickly accelerate firing and decelerate hiring. When the economy improves, the large pool of unemployed workers in combination with the relative inelasticity of the matching probability to labor market tightness stimulates firms' hiring decisions. As Lubik notes, the exercise in this paper is constrained somewhat by the model's small scale. Publicly available data are currently limited, but future research may be able to bring more data to bear on the empirical analysis. In addition, future research could extend the theoretical model, for example, by adding a monetary sector to study the transmission of monetary policy shocks to the labor market.
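The flow logic behind these dynamics can be sketched with the model's steady-state condition: inflows into unemployment, σ(1−u), equal outflows, f·u, where σ is the separation rate and f is the job-finding rate implied by a Cobb-Douglas matching function m = μ·u^α·v^(1−α), with μ the match efficiency and α the match elasticity. The parameter values below are illustrative assumptions, not Lubik's estimates for Hong Kong.

```python
def job_finding_rate(u, v, mu, alpha):
    """Job-finding rate f = m/u from a Cobb-Douglas matching function
    m = mu * u**alpha * v**(1 - alpha), where theta = v/u is market tightness."""
    theta = v / u
    return mu * theta ** (1 - alpha)

def steady_state_unemployment(sigma, f):
    """Steady state: inflows sigma*(1-u) equal outflows f*u, so u = sigma/(sigma+f)."""
    return sigma / (sigma + f)

# Illustrative quarterly values (assumed, not estimated from Hong Kong data)
f = job_finding_rate(u=0.05, v=0.03, mu=0.7, alpha=0.5)
u_star = steady_state_unemployment(sigma=0.02, f=f)
print(round(u_star, 3))  # 0.036
```

A lower separation rate σ, as estimated for Hong Kong, directly lowers steady-state unemployment, consistent with the low-turnover interpretation above.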
http://dx.doi.org/10.1111/j.1468-0106.2012.00582.x
http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2011/11/11/000386194_20111111025924/Rendered/PDF/655470PUB0EPI2065712B09780821388495.pdf
http://www.cambridge.org/aus/catalogue/catalogue.asp?isbn=9781107011885&ss=toc
The Richmond Fed Research Digest is written and edited by Karl Rhodes and Tim Sablik and published by the Research Department at the Federal Reserve Bank of Richmond. It may be photocopied or reprinted in its entirety. Please credit the Federal Reserve Bank of Richmond and include the statement below. The Richmond Fed Research Digest summarizes articles and other works that reflect the views of their respective authors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.