To most thoughtful people, unemployment benefits embody a painful trade-off. They are the mark of a civilized society, clubbing together to provide assistance to those in need. They are also, regrettably, an incentive to remain unemployed. At their worst, unemployment benefits pay people to watch daytime television. They are particularly pernicious if the skills of the jobless decay and unemployment becomes unemployability. Yet, at their best, they are a life-saver.
In balancing these two effects, it’s hardly surprising that different societies have adopted very different systems. According to the Organization for Economic Co-operation and Development, member governments spent an average of 0.75 percent of gross domestic product on unemployment benefits in 2006. France spent nearly twice this sum, and Germany almost three times as much, while the United States spent one-third of the average, and the United Kingdom just over a quarter. Relative to GDP, Germany spent more than 10 times as much as the United Kingdom.
Paying people to stay out of work is an example of that increasingly familiar phenomenon “moral hazard.” But moral hazard can be more fearsome in the theorist’s imagination than it is in reality. Do unemployment benefits really encourage people to duck work? Unfortunately, the evidence suggests that they do: Increases in benefits have repeatedly been linked with longer periods between jobs.
But new research from Raj Chetty, a young Berkeley economist, suggests that moral hazard may not be why more generous benefits seem to lead to more unemployment. Chetty realized that unemployment benefits do not merely pay people to stay out of work; they also protect them from having to rush into an unsuitable job. It is nothing to celebrate if unemployed engineers cannot afford to spend three months finding a job for which they are qualified but are forced to work as real estate agents to put food on the table. A longer gap between jobs is sometimes preferable.
This is an interesting theory, but distinguishing between moral hazard and the effect of having some cash on hand is tough. Chetty looked at sharp breaks in the unemployment-insurance rules in the United States, comparing one state’s rules with another’s or examining moments when the rules changed. One suggestive finding is that when unemployment insurance becomes more generous, not everybody lingers on benefits. The median job-loser in the United States has just $200 to his name and is unlikely to be able to borrow much, but some people have plenty of money in the bank when they find themselves unemployed. Chetty found that those with savings take no longer to find a job when paid more generous benefits, while those with little in the kitty when they lose their jobs do. This suggests that those without their own cash reserves are using unemployment benefits to buy themselves time to find the right job.
Of course, there may be many differences between people with savings and those without, so this merely suggests that Chetty is on to something. But there are other clues. For instance, Chetty and two colleagues looked at the system in Austria, where severance pay is due to anyone employed for more than three years. By studying, say, a factory closure in which lots of staff are fired simultaneously, they could treat severance pay almost as a randomized experiment: workers just short of three years’ service got nothing, while otherwise similar colleagues just past the threshold got a payout. Those lucky enough to get severance pay spent more time looking for a new job, even though severance pay provides no direct incentive to stay out of work.
Unemployment benefits do encourage unemployment in the short term, but that may not be a bad thing.