Will the United States Experience a Sustained Boom in the Growth Rate of Labor Productivity?

Blue Planet Studio/Shutterstock

Recent articles in the business press have discussed the possibility that the U.S. economy is entering a period of higher growth in labor productivity:

“Fed’s Goolsbee Says Strong Hiring Hints at Productivity Growth Burst” (link)

“US Productivity Is on the Upswing Again. Will AI Supercharge It?” (link)

“Can America Turn a Productivity Boomlet Into a Boom?” (link)

In Macroeconomics, Chapter 16, Section 16.7 (Economics, Chapter 26, Section 26.7), we highlighted the role of growth in labor productivity in explaining the growth rate of real GDP using the following equations. First, an identity:

Real GDP = Number of hours worked x (Real GDP/Number of hours worked),

where (Real GDP/Number of hours worked) is labor productivity.

And because the growth rate of a product of two variables is approximately equal to the sum of the growth rates of those variables, we have:

Growth rate of real GDP = Growth rate of hours worked + Growth rate of labor productivity
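A quick numerical check makes clear why this approximation works. The growth rates below are illustrative, not actual data:

```python
# Check that the growth rate of a product of two variables is approximately
# the sum of the growth rates of the variables.
hours_growth = 0.004         # 0.4% growth in hours worked (illustrative)
productivity_growth = 0.020  # 2.0% growth in labor productivity (illustrative)

# Exact growth rate of real GDP = hours worked x labor productivity:
exact = (1 + hours_growth) * (1 + productivity_growth) - 1

# The approximation: just add the two growth rates.
approx = hours_growth + productivity_growth

print(f"exact: {exact:.4%}, sum of growth rates: {approx:.4%}")
```

For growth rates this small, the two answers differ by less than a tenth of a percentage point, which is why economists routinely use the sum.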

From 1950 to 2023, real GDP grew at an annual average rate of 3.1 percent. In recent years, real GDP has been growing more slowly. For example, it grew at a rate of only 2.0 percent from 2000 to 2023. In February 2024, the Congressional Budget Office (CBO) forecast that real GDP would grow at 2.0 percent from 2024 to 2034. Although the difference between a growth rate of 3.1 percent and a growth rate of 2.0 percent may seem small, if real GDP were to return to growing at 3.1 percent per year, it would be $3.3 trillion larger in 2034 than if it grows at 2.0 percent per year. The additional $3.3 trillion in real GDP would result in higher incomes for U.S. residents and would make it easier for the federal government to reduce the size of the federal budget deficit and to better fund programs such as Social Security and Medicare. (We discuss the issues concerning the federal government’s budget deficit in this earlier blog post.)
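The power of that seemingly small difference in growth rates comes from compounding. The following sketch reproduces the comparison; the starting level of $23 trillion is an assumption chosen only to make the gap concrete, not the CBO's own figure:

```python
# Illustrative sketch (not the CBO's calculation): compound real GDP at two
# growth rates for 10 years and compare the resulting levels.
start_gdp = 23.0  # trillions of dollars (assumed starting level)
years = 10

gdp_fast = start_gdp * (1 + 0.031) ** years  # 3.1% annual growth
gdp_slow = start_gdp * (1 + 0.020) ** years  # 2.0% annual growth

gap = gdp_fast - gdp_slow
print(f"GDP gap after {years} years: ${gap:.1f} trillion")
```

Even over a single decade, the 1.1-percentage-point difference in growth rates compounds into a gap of roughly $3 trillion.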

Why has growth in real GDP slowed from a 3.1 percent rate to a 2.0 percent rate? The two expressions on the right-hand side of the equation for growth in real GDP—the growth in hours worked and the growth in labor productivity—have both slowed. Slowing population growth and a decline in the average number of hours worked per worker have caused the growth rate of hours worked to slow substantially, from a rate of 2.0 percent per year from 1950 to 2023 to a forecast rate of only 0.4 percent per year from 2024 to 2034.

Falling birthrates explain most of the decline in population growth. Although lower birthrates have been partially offset by higher levels of immigration in recent years, it seems unlikely that birthrates will increase much even in the long run, and levels of immigration also seem unlikely to increase substantially in the future. Therefore, for the growth rate of real GDP to increase significantly, the rate of growth of labor productivity must increase.

The Bureau of Labor Statistics (BLS) publishes quarterly data on labor productivity. (Note that the BLS series is for labor productivity in the nonfarm business sector rather than for the whole economy. Output of the nonfarm business sector excludes output by government, nonprofit businesses, and households. Over long periods, growth in real GDP per hour worked and growth in real output of the nonfarm business sector per hour worked have similar trends.) The following figure is taken from the BLS report “Productivity and Costs,” which was released on February 1, 2024.

Note that the growth in labor productivity increased during the last three quarters of 2023, whether we measure the growth rate as the percentage change from the same quarter in the previous year or as growth in a particular quarter expressed as an annual rate. It’s this increase in labor productivity during 2023 that has led to speculation that labor productivity might be entering a period of higher growth. The following figure shows labor productivity growth, measured as the percentage change from the same quarter in the previous year, for the whole period from 1950 to 2023.

The figure indicates that labor productivity has fluctuated substantially over this period. We can note, in particular, productivity growth during two periods: First, from 2011 to 2018, labor productivity grew at the very slow rate of 0.9 percent per year. Some of this slowdown reflected the slow recovery of the U.S. economy from the Great Recession of 2007-2009, but the slowdown persisted long enough to cause concern that the U.S. economy might be entering a period of stagnation or very slow growth.

Second, from 2019 through 2023, labor productivity went through very large swings. Labor productivity experienced strong growth during 2019, then, as the Covid-19 pandemic began affecting the U.S. economy, labor productivity soared through the first half of 2021 before declining for five consecutive quarters from the first quarter of 2022 through the first quarter of 2023—the first time productivity had fallen for that long a period since the BLS first began collecting the data. Although these swings were particularly large, the figure shows that during and in the immediate aftermath of recessions labor productivity typically fluctuates dramatically. The reason for the fluctuations is that firms can be slow to lay workers off at the beginning of a recession—which causes labor productivity to fall—and slow to hire workers back during the beginning of an economic recovery—which causes labor productivity to rise.

Does the recent increase in labor productivity growth represent a trend? Labor productivity growth, measured as the percentage change from the same quarter in the previous year, was 2.7 percent during the fourth quarter of 2023—higher than in any quarter since the first quarter of 2021. Measured as the percentage change from the previous quarter at an annual rate, labor productivity grew at a very high average rate of 3.9 percent during the last three quarters of 2023. It’s this high rate that some observers are pointing to when they wonder whether growth in labor productivity is on an upward trend.
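The two ways of measuring quarterly growth mentioned above can be sketched as follows. The index values here are illustrative, not actual BLS data:

```python
# Two common ways to express quarterly productivity growth,
# using illustrative productivity index values (not actual BLS data).
index_this_quarter = 111.0
index_prev_quarter = 110.0
index_year_ago = 108.1

# Year-over-year: percentage change from the same quarter a year earlier.
yoy = (index_this_quarter / index_year_ago - 1) * 100

# Quarter-over-quarter at an annual rate: compound one quarter's
# growth over four quarters.
q_growth = index_this_quarter / index_prev_quarter - 1
annualized = ((1 + q_growth) ** 4 - 1) * 100

print(f"year-over-year: {yoy:.1f}%, quarterly at an annual rate: {annualized:.1f}%")
```

Because the annualized measure compounds a single quarter's change, it is much noisier than the year-over-year measure, which is one reason the two series in the BLS figure can tell somewhat different stories.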

As with any other economic data, you should use caution in interpreting changes in labor productivity over a short period. The productivity data may be subject to large revisions as the two underlying series—real output and hours worked—are revised in coming months. In addition, it’s not clear why the growth rate of labor productivity would be increasing in the long run. The most common reasons advanced are: 1) the productivity gains from the increase in the number of people working from home since the pandemic, 2) businesses’ increased use of artificial intelligence (AI), and 3) potential efficiencies that businesses discovered as they were forced to operate with a shortage of workers during and after the pandemic.

To this point, it’s difficult to evaluate the long-run effects of any of these factors. Economists and business managers haven’t yet reached a consensus on whether working from home increases or decreases productivity. (The debate is summarized in this National Bureau of Economic Research working paper, written by Jose Maria Barrero of Instituto Tecnologico Autonomo de Mexico, and Steven Davis and Nicholas Bloom of Stanford. You may need to access the paper through your university library.)

Many economists believe that AI is a general purpose technology (GPT), which means that it may have broad effects throughout the economy. But to this point, AI hasn’t been adopted widely enough to be a plausible cause of an increase in labor productivity. In addition, as Erik Brynjolfsson and Daniel Rock of MIT and Chad Syverson of the University of Chicago argue in this paper, the introduction of a GPT may initially cause productivity to fall as firms attempt to use an unfamiliar technology. The third reason—efficiency gains resulting from the pandemic—is to this point mainly anecdotal. There are many cases of businesses that discovered efficiencies during and immediately after Covid as they struggled to operate with a smaller workforce, but we don’t yet know whether these cases are sufficiently common to have had a noticeable effect on labor productivity.

So, we’re left with the conclusion that if the high labor productivity growth rates of 2023 can be maintained, the growth rate of real GDP will be correspondingly higher than most economists are expecting. But it’s too early to know whether recent high rates of labor productivity growth are sustainable.

The Roman Emperor Vespasian Fell Prey to the Lump-of-Labor Fallacy

Bust of the Roman Emperor Vespasian. (Photo from en.wikipedia.org.)

Some people worry that advances in artificial intelligence (AI), particularly the development of chatbots, will permanently reduce the number of jobs available in the United States. Technological change is often disruptive, eliminating jobs and sometimes whole industries, but it also creates new industries and new jobs. For example, the development of mass-produced, low-priced automobiles in the early 1900s wiped out many jobs dependent on horse-drawn transportation, including wagon building and blacksmithing. But automobiles created many new jobs not only on automobile assembly lines, but in related industries, including repair shops and gas stations.

Over the long run, total employment in the United States has increased steadily with population growth, indicating that technological change doesn’t decrease the total number of jobs available. As we discuss in Microeconomics, Chapter 16 (also Economics, Chapter 16), fears that firms will permanently reduce their demand for labor as they increase their use of the capital that embodies technological breakthroughs date back at least to the late 1700s in England, when textile workers known as Luddites—after their leader Ned Ludd—smashed machinery in an attempt to save their jobs. Since that time, the term Luddite has described people who oppose firms increasing their use of machinery and other capital because they fear the increases will result in permanent job losses.

Economists believe that these fears often stem from the lump-of-labor fallacy, which holds that there is only a fixed amount of work to be performed in the economy. So the more work that machines perform, the less work that will be available for people to perform. As we’ve noted, though machines are substitutes for labor in some uses—such as when chatbot software replaces employees who currently write technical manuals or computer code—they are also complements to labor in other jobs—such as advising firms on how best to use chatbots.

The lump-of-labor fallacy has a long history, probably because it seems like common sense to many people who see the existing jobs that a new technology destroys, without always being aware of the new jobs that the technology creates. There are historical examples of the lump-of-labor fallacy that predate even the original Luddites.

For instance, in his new book Pax: War and Peace in Rome’s Golden Age, the British historian Tom Holland (not to be confused with the actor of the same name, best known for portraying Spider-Man!) discusses an account by the ancient historian Suetonius of an event during the reign of Vespasian, who was Roman emperor from 69 A.D. to 79 A.D. (p. 201):

“An engineer, so it was claimed, had invented a device that would enable columns to be transported to the summit of the [Roman] Capitol at minimal cost; but Vespasian, although intrigued by the invention, refused to employ it. His explanation was a telling one. ‘I have a duty to keep the masses fed.’”

Vespasian had fallen prey to the lump-of-labor fallacy by assuming that eliminating some of the jobs hauling construction materials would reduce the total number of jobs available in Rome. As a result, it would be harder for Roman workers to earn the income required to feed themselves.

Note that, as we discuss in Macroeconomics, Chapters 10 and 11 (also Economics, Chapters 20 and 21), over the long run, in any economy technological change is the main source of rising incomes. Technological change increases the productivity of workers, and the only way for the average worker to consume more output is for the average worker to produce more output. In other words, most economists agree that the main reason that the wages—and, therefore, the standard of living—of the average worker today are much higher than they were in the past is that workers today are much more productive because they have more and better capital to work with.

Although the Roman Empire controlled most of Southern and Western Europe, the Near East, and North Africa for more than 400 years, the living standard of the average citizen of the Empire was no higher at the end of the Empire than it had been at the beginning. Efforts by emperors such as Vespasian to stifle technological progress may be part of the reason why. 

Claudia Goldin Wins the Nobel Prize in Economics

Claudia Goldin (Photo from Goldin’s web page at harvard.edu.)

Claudia Goldin, the Henry Lee Professor of Economics at Harvard, has been awarded the 2023 Nobel Prize in Economic Sciences. Goldin’s research is wide-ranging, with a focus on the economic history of women and on gender disparities in wages and employment. She received her PhD from the University of Chicago in 1972 for a thesis that was published in 1976 as Urban Slavery in the American South, 1820 to 1860: A Quantitative History. Her thesis adviser, Robert Fogel, was awarded the Nobel Prize in 1993 for his work in economic history. He shared the prize that year with Douglass North of Washington University in St. Louis. Goldin’s work on economic history contributed to the cliometric revolution, which involves the application of theoretical models and econometric methods to the study of historical issues. At the time of the award to Fogel and North, Goldin discussed their research and the cliometric revolution here.

Goldin’s pioneering and influential research on the economic history of women was the basis for her 1990 book Understanding the Gender Gap: An Economic History of American Women. The themes of that book were expanded on in 2021 in Career & Family: Women’s Century-Long Journey toward Equity, and in her forthcoming An Evolving Force: A History of Women in the Economy.

In research with Lawrence Katz, also a professor of economics at Harvard, Goldin has explored how technological change and educational attainment have affected income inequality, particularly the wage premium skilled workers receive. Goldin and Katz summarized their findings in 2008 in the influential book, The Race between Education and Technology.

The wide scope of Goldin’s research can be seen by reviewing her curriculum vitae, which can be found here. The announcement by the Nobel committee can be found here.

Antitrust Policy and Monopsony Power

Photo from the New York Times.

As we discuss in Microeconomics and Economics, Chapter 15, Section 15.6, the U.S. Department of Justice’s Antitrust Division and the Federal Trade Commission have merger guidelines that they typically follow when deciding whether to oppose a merger between two firms in the same industry—these mergers are called horizontal mergers. The guidelines are focused on the effect a potential merger would have on the market price of the industry’s output. We know that if the price in a market increases, holding everything else constant, consumer surplus will decline and the deadweight loss in the market will increase. But, as we note in Chapter 15, if a merger increases the efficiency of the merged firms, the result can be a decrease in costs that will lower the price, increase consumer surplus, and reduce the deadweight loss.

The merger guidelines focus on the effect of two firms combining on the merged firms’ market power in the output market.  For example, if two book publishers merge, what will be the effect on the price of books? But what if the newly merged firm gains increased market power in input markets and uses that power to force its suppliers to accept lower prices? For example, if two book publishers merge will they be able to use their market power to reduce the royalties they pay to writers? The federal antitrust authorities have traditionally considered market power in the output market—sometimes called monopoly power—but rarely considered market power in the input market—sometimes called monopsony power.

In Chapter 16, Section 16.6, we note that a pure monopsony is the sole buyer of an input, a rare situation that might occur in, for example, a small town in which a lumber mill is the sole employer. A monopoly in an output market in which a single firm is the sole seller of a good is also rare, but many firms have some monopoly power because they have the ability to charge a price higher than marginal cost. Similarly, although monopsonies in input markets are rare, some firms may have monopsony power because they have the ability to pay less than the competitive equilibrium price for an input. For example, as we noted in Chapter 14, Section 14.4, Walmart is large enough in the market for some products, such as detergent and toothpaste, that it is able to insist that suppliers give it discounts below what would otherwise be the competitive price.

Monopsony power was the key issue involved in November 2021 when the Justice Department filed an antitrust lawsuit to keep the book publisher Penguin Random House from buying Simon & Schuster, another one of the five largest publishers. The merged firm would account for 31 percent of books published in the U.S. market. The lawsuit alleged that buying Simon & Schuster would allow “Penguin Random House, which is already the largest book publisher in the world, to exert outsized influence over which books are published in the United States and how much authors are paid for their work.”

We’ve seen that when two large firms propose a merger, they often argue that the merger will allow efficiency gains large enough to result in lower prices despite the merged firm having increased monopoly power. In August 2022, during the antitrust trial over the Penguin–Simon & Schuster merger, Markus Dohle, the CEO of Penguin Random House, made a similar argument, but this time with respect to an input market—payments to book authors. He argued that because Penguin had a much better distribution network, sales of Simon & Schuster books would increase, which would lead to increased payments to authors. Authors would be made better off by the merger even though the newly merged firm would have greater monopsony power. Penguin’s attorneys also argued that the market for book publishing was larger than the Justice Department believed. They argued that the relevant book market included not just the five largest publishers but also included Amazon and many medium and small publishers “all capable of competing for [the right to publish] future titles from established and emerging authors.” The CEO of Hachette Book Group, another large book publisher, disagreed, arguing at the trial that the merger between Penguin and Simon & Schuster would result in lower payments to authors.

The antitrust lawsuit against Penguin and Simon & Schuster was an example of the more aggressive antitrust policy being pursued by the Biden administration. (We discussed the Biden administration’s approach to antitrust policy in this earlier blog post.) An article in the New York Times quoted a lawyer for a legal firm that specializes in antitrust cases as arguing that the lawsuit against Penguin and Simon & Schuster was unusual in that the lawsuit “declines to even allege the historically key antitrust harm—increased prices.” The outcome of the Justice Department’s lawsuit against Penguin and Simon & Schuster may provide insight into whether federal courts will look favorably on the Biden administration’s more aggressive approach to antitrust policy. 

Sources: Jan Wolfe, “Penguin Random House CEO Defends Publishing Merger at Antitrust Trial,” Wall Street Journal, August 4, 2022; David McCabe, “Justice Dept. and Penguin Random House’s Sparring over Merger Has Begun,” New York Times, August 1, 2022; Eduardo Porter, “A New Legal Tactic to Protect Workers’ Pay,” New York Times, April 14, 2022; Janet H. Cho and Karishma Vanjani, “Justice Department Seeks to Block Penguin Random House Buy of Viacom’s Simon & Schuster,” barrons.com, November 2, 2021; and United States Department of Justice, “Justice Department Sues to Block Penguin Random House’s Acquisition of Rival Publisher Simon & Schuster,” justice.gov, November 2, 2021.

What Is the Economic Payoff to Free Community College?

Northampton County Community College in Pennsylvania

People who graduate from college earn significantly more and have lower unemployment rates than do people who have only a high school degree. (We discuss this issue in Chapter 16, Section 16.3.) As the following table shows, in 2020, people with a bachelor’s degree had median usual weekly earnings of $1,305 and had an unemployment rate of 5.5 percent, while people who had only a high school degree had weekly earnings of $781 and an unemployment rate of 9.0 percent. People with an associate’s degree from a two-year community college were in between the other two groups.

Educational attainment           Median usual weekly earnings ($)  Unemployment rate (%)
Doctoral degree                  1,885                             2.5
Professional degree              1,893                             3.1
Master’s degree                  1,545                             4.1
Bachelor’s degree                1,305                             5.5
Associate’s degree               938                               7.1
Some college, no degree          877                               8.3
High school diploma              781                               9.0
Less than a high school diploma  619                               11.7
Total                            1,029                             7.1

Not surprisingly, attempts to reduce income inequality have often included plans to increase the number of low-income people who attend college. In 2021, President Joe Biden proposed a plan that would cover the tuition of most first-time students attending community college in states that agreed to participate in the plan. The plan was estimated to cost $109 billion over 10 years and would potentially cover 5.5 million students. As of late 2021, it appeared unlikely that Congress would enact the plan, but similar plans have been proposed in the past, making the economic payoff to free community college an important policy issue.

There are many federal, state, and local programs that already cover some or all of the tuition and fees for some community college students. The state and local programs are often called “promise programs.” The name refers to what is usually considered the first such program, which began in Kalamazoo, Michigan, in 2005. There are now more than 200 promise programs in 41 states. The programs differ in the percentage of a student’s tuition and fees that are covered and in which students are eligible. The plan proposed by President Biden would have differed from existing programs in being more comprehensive—covering community college tuition for all high school graduates who had not previously attended college.

We can’t offer here a full assessment of the economic effects that might result from a nationwide free community college program, but we can briefly summarize some of the very large number of economic studies of community colleges. Hieu Nguyen of Illinois Wesleyan University has studied the effects of the Tennessee Promise program, which, beginning in fall 2015, has covered the part of tuition not covered by federal or other state programs for any Tennessee high school graduate who enrolls in a public two-year college in the state. Nguyen’s analysis finds that the program had a very large effect, increasing “full-time first-time undergraduate enrollment at the state’s community colleges by at least 40%.” Some policymakers and economists are concerned that promise programs may divert some students into attending community college who would otherwise have enrolled in a four-year college. As the table shows, on average, people who graduate from a four-year college have higher incomes and lower unemployment rates than people who graduate from a two-year college. Nguyen’s analysis indicates that this problem was not significant in Tennessee. He finds that the Tennessee Promise program resulted in only a 2 percent decline in enrollment in Tennessee’s public four-year colleges.

Oded Gurantz of the University of Missouri studied the Oregon Promise program, which, like the Tennessee Promise program, covers the part of tuition at Oregon two-year colleges not covered by federal or other state programs, with the difference that it awards $1,000 per year to students whose tuition is completely covered from other sources. The program began in 2016. Gurantz finds that although the program did increase enrollment in two-year colleges by four to five percent, initially nearly all of the increase was the result of students shifting away from four-year colleges. He finds that in later years the program was effective in increasing enrollment in both two-year and four-year colleges.

Elizabeth Bell of Miami University finds that a narrowly focused Oklahoma program that covers tuition and fees at a single two-year college—Tulsa Community College—succeeds in substantially increasing the number of students who transfer to a four-year college and, to a lesser extent, increasing the fraction of students who receive degrees from four-year colleges.

A number of researchers have studied the returns to individuals from attending community college. As with the returns from four-year colleges, choice of major can be very important. Michael Grosz of the Federal Trade Commission found that, controlling for the individual characteristics of students, receiving a degree in nursing from a California community college “increases earnings by 44 percent and the probability of working in the health care industry by 19 percentage points.”  

Jack Mountjoy of the University of Chicago has compiled a large data set for the state of Texas that links enrollment in all public and private universities in the state to students’ earnings later in life. Mountjoy uses the data to analyze the effects of community college on upward mobility. The upward mobility of students who attend a community college is increased if the students would otherwise not have attended college but hindered if they attend a community college rather than a four-year college they were qualified to attend. Mountjoy notes that survey evidence indicates that 81 percent of students enrolling in two-year colleges intend to ultimately receive a degree from a four-year college, but only 33 percent transfer to a four-year college within six years and only 14 percent ultimately earn a bachelor’s degree.

Because so few students who enroll in a two-year college ultimately receive a degree from a four-year college, promise programs run the risk of actually reducing the number of students who receive bachelor’s degrees by diverting some students from four-year colleges to two-year colleges. Mountjoy’s analysis of the Texas data indicates that “broad expansions of 2-year college access are likely to boost the upward mobility of students ‘democratized’ into higher education from non-attendance, but more targeted policies that avoid significant 4-year diversion may generate larger net benefits.” He notes that for low-income students, “2-year college enrollment may involve other labor market benefits … beyond modest increases in formal educational attainment, such as better access to employer networks, short course sequences teaching readily-employable skills, and improved job matching.”

Mountjoy’s results reinforce a point made by some other economists and policymakers: Programs that provide free community college for all students may be a less effective way for governments to spend scarce funds than are programs that focus on boosting the ability of low-income students to attend and complete both two-year and four-year colleges. Many low-income students face barriers beyond difficulty affording tuition, including the lost earnings from time spent in class and studying rather than working, child care expenses, and paying for textbooks and other learning materials. In addition, providing free tuition at community colleges to all students may end up subsidizing college attendance for some middle and high-income students who would have attended college without the subsidy and may provide an incentive for some students to enroll in two-year colleges who would have been better off enrolling in four-year colleges.

Sources: Julie Bykowicz and Douglas Belkin, “Why Biden’s Plan for Free Community College Likely Will Be Cut From Budget Package,” wsj.com, October 21, 2021; Michael Grosz, “The Returns to a Large Community College Program: Evidence from Admissions Lotteries,” American Economic Journal: Economic Policy, Vol. 12, No. 1, February 2020, pp. 226-253; Hieu Nguyen, “Free College? Assessing Enrollment Responses to the Tennessee Promise Program,” Labour Economics, Vol. 66, October 2020; Oded Gurantz, “What Does Free Community College Buy? Early Impacts from the Oregon Promise,” Journal of Policy Analysis and Management, Vol. 39, No. 1, October 2020, pp. 11-35; Elizabeth Bell, “Does Free Community College Improve Student Outcomes? Evidence From a Regression Discontinuity Design,” Educational Evaluation and Policy Analysis, Vol. 43, No. 2, June 2021, pp. 329-350; Jack Mountjoy, “Community Colleges and Upward Mobility,” National Bureau of Economic Research, Working Paper 29254, September 2021; Allison Pohle, “What Does Biden’s Plan for Families Mean for Community College, Pre-K?” wsj.com, April 28, 2021; Meredith Billings, “Understanding the Design of College Promise Programs, and Where to Go from Here,” brookings.edu, September 18, 2018; and U.S. Bureau of Labor Statistics, “Employment Projections: Education Pays,” Table 5.1, September 8, 2021.

Card, Angrist, and Imbens Win Nobel Prize in Economics

David Card
Joshua Angrist
Guido Imbens

David Card of the University of California, Berkeley; Joshua Angrist of the Massachusetts Institute of Technology; and Guido Imbens of Stanford University shared the 2021 Nobel Prize in Economics (formally, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel). Card received half of the prize of 10 million Swedish kronor (about 1.14 million U.S. dollars) “for his empirical contributions to labor economics,” and Angrist and Imbens shared the other half “for their methodological contributions to the analysis of causal relationships.” (In the work for which they received the prize, all three had collaborated with the late Alan Krueger of Princeton University. Card was quoted in the Wall Street Journal as stating that: “I’m sure that if Alan was still with us that he would be sharing this prize with me.”)

The work of the three economists is related in that all have used natural experiments to address questions of economic causality. With a natural experiment, economists identify some variable of interest—say, an increase in the minimum wage—that has changed for one group of people—say, fast-food workers in one state—while remaining unchanged for another similar group of people—say, fast-food workers in a neighboring state. Researchers can draw an inference about the effects of the change by looking at the difference between the outcomes for the two groups. In this example, the difference between changes in employment at fast-food restaurants in the two states can be used to measure the effect of an increase in the minimum wage.
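The inference described above—comparing the change in the treated group with the change in the comparison group—is often called a difference-in-differences calculation. A minimal sketch, with all numbers made up for illustration:

```python
# A minimal difference-in-differences sketch of the natural-experiment
# logic described above. All numbers are illustrative, not actual data.
# Values are average employment per fast-food restaurant before and after
# one state raises its minimum wage.
treated_before, treated_after = 20.0, 20.5  # state that raised its minimum wage
control_before, control_after = 21.0, 20.8  # neighboring state, no change

# Each state's change over time:
treated_change = treated_after - treated_before
control_change = control_after - control_before

# Subtracting the control state's change nets out shocks common to both
# states (e.g., a regional downturn), isolating the policy's effect.
did_estimate = treated_change - control_change
print(f"difference-in-differences estimate: {did_estimate:+.1f} workers per restaurant")
```

The key assumption, of course, is that the two groups would have followed parallel trends in the absence of the policy change—the main point researchers must defend when using a natural experiment.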

Using natural experiments is an alternative to the traditional approach that had dominated empirical economics from the 1940s, when the increased availability of modern digital computers made it possible to apply econometric techniques to real-world data. With the traditional approach to empirical work, economists would estimate structural models to answer questions about causality. So, for instance, a labor economist might estimate a model of the demand and supply of labor to predict the effect of an increase in the minimum wage on employment.

Over the years, many economists became dissatisfied with using structural models to address questions of economic causality. They concluded that the information requirements to reliably estimate structural models were too great. For instance, structural models require assumptions about the functional form of relationships, such as the demand for labor, that are not inferable directly from economic theory. Theory also did not always identify all variables that should be included in the model. Gathering data on the relevant variables was sometimes difficult. As a result, answers to empirical questions, such as the employment effects of the minimum wage, differed substantially across studies. In such cases, policymakers began to see empirical economics as an unreliable guide to economic policy.

In a famous study of the effect of the minimum wage on employment published in 1994 in the American Economic Review, Card and Krueger pioneered the use of natural experiments. In that study, Card and Krueger analyzed the effect of the minimum wage on employment in fast-food restaurants by comparing what happened to employment in New Jersey when it raised the state minimum wage from $4.25 to $5.05 per hour with employment in eastern Pennsylvania, where the minimum wage remained unchanged. They found that, contrary to the usual analysis that increases in the minimum wage lead to decreases in the employment of unskilled workers, employment of fast-food workers in New Jersey actually increased relative to employment of fast-food workers in Pennsylvania.

The following graphic from the Nobel Prize website summarizes the study. (Note that not all economists have accepted the results of Card and Krueger’s study. We briefly summarize the debate over the effects of the minimum wage in Chapter 4, Section 4.3 of our textbook.)

Drawing inferences from natural experiments is not as straightforward as it might seem from our brief description. Angrist and Imbens helped develop the techniques that many economists rely on when analyzing data from natural experiments.

Taken together, the work of these three economists represents a revolution in empirical economics. They have provided economists with an approach and analytical techniques that have been applied to a wide range of empirical questions.

For the announcement from the Nobel website click HERE.

For the article in the Wall Street Journal on the prize click HERE (note that a subscription may be required).

For the original Card and Krueger paper on the minimum wage click HERE.

For David Card’s website click HERE.

For Joshua Angrist’s website click HERE.

For Guido Imbens’s website click HERE.

How Do Firms Evaluate New Hires? The Curious Case of NFL Quarterbacks

As we discuss in Chapter 16, the demand for labor depends on the marginal product of labor. In our basic model of a competitive labor market we assume that all workers have the same ability, skills, and training. Firms can hire as many workers as they would like at the market equilibrium wage. Because, by assumption, all workers have the same abilities, firms don’t have to worry about whether one person might be less able or willing to perform the assigned work than another person.

In reality, we know that most firms face more complicated hiring decisions. Even for a job, such as being a cashier in a supermarket, that most people can be quickly trained to do, workers differ in how well they carry out their tasks and whether they can be relied on to regularly show up for work and to treat customers politely.

When hiring workers, firms face a problem of asymmetric information: Workers know more about whether they intend to work hard than firms know. Even for applicants who have a work history, a firm may have difficulty discovering how well or how poorly the applicant performed his or her duties in earlier jobs. In responding to inquiries from other firms about a job applicant, firms are rarely willing to do more than confirm that a person has worked at the firm because they are afraid that reporting anything negative about the person—even if true—might expose the firm to a lawsuit. In Section 16.5, we discuss the field of personnel economics, which includes the study of how firms design compensation policies that attempt to ensure that workers have an incentive to work hard.

When hiring someone entering the labor market, such as a new college graduate, firms have a particular problem in gauging the likely performance of a worker who may have no job history. In this case, there may not be a problem of asymmetric information because the worker may also be uncertain as to how well he or she will be able to perform the job, particularly if the worker has not previously held a full-time job in that field. When hiring new college graduates, firms may rely on an applicant’s college grades, the reputation of the applicant’s college, and the applicant’s scores on standardized tests. Some firms have also developed their own tests to measure an applicant’s cognitive skills, knowledge relevant to the position applied for, and even psychological temperament. Some technology firms and investment banks ask applicants to complete demanding problems that may be unrelated to either technology or banking but can provide insight into whether the applicant has the cognitive ability and temperament to quickly complete complicated tasks.

Teams in the National Football League (NFL) face an interesting problem when hiring new players, particularly those playing the position of quarterback. College football players hoping to play professional football enter the NFL draft, in which each of the 32 teams selects players in seven rounds, with the selections being in reverse order of the teams’ records during the previous football season. There is often a substantial gap between an athlete’s ability to be successful playing college football and his ability to be successful in the NFL. As a result, many players who are stars in college are unable to succeed as professionals.

The position of quarterback is usually thought to be the most difficult to succeed at. Many highly regarded college quarterbacks fail to do well in the NFL. Teams typically settle on one player as their starting quarterback who will play most of the time. But teams also have one or two backups. Sometimes the backups are older, former starters on other teams, but often they are players chosen in the draft of college players. It’s very difficult to judge how well a quarterback is likely to perform except by seeing him play in a game. Players who perform well in practice often don’t play well in games. As a result, a backup quarterback may be drafted and, if the starting quarterback on his team remains healthy and is effective, earn a nice salary from year to year without actually playing in many games. If a team’s starting quarterback is injured or is ineffective, the backup quarterback may play in several games during a season.

If the backup shows himself to be an effective player, the team may decide to retain him as the starter—with a substantial increase in salary. But given the difficulty of playing the position of quarterback, a more likely outcome is that the backup plays poorly and the team decides to draft another backup quarterback the following year.

The result is an odd situation: The more that a backup quarterback plays in games, often the less likely he is to keep his job. And the less that a backup quarterback plays, the more likely he is to keep his job. Or as one NFL head coach put it: “Backups who don’t play a lot tend to have long NFL careers, while those who are exposed [by actually] playing … have shorter careers.”

This outcome is an extreme example of the difficulty firms sometimes have in measuring how well new hires are likely to perform in their jobs.

Source for quote: Sportswriter David Lombardi on Twitter, quoting San Francisco 49ers’ head coach Kyle Shanahan, December 14, 2020.

How the Effects of the Covid-19 Recession Differed Across Business Sectors and Income Groups

The recession that resulted from the Covid-19 pandemic affected most sectors of the U.S. economy, but some sectors of the economy fared better than others. As a broad generalization, we can say that online retailers, such as Amazon; delivery firms, such as FedEx and DoorDash; many manufacturers, including GM, Tesla, and other automobile firms; and firms, such as Zoom, that facilitate online meetings and lessons, have done well. Again, generalizing broadly, firms that supply a service, particularly if doing so requires in-person contact, have done poorly. Examples are restaurants, movie theaters, hotels, hair salons, and gyms.

The following figure uses data from the Federal Reserve Economic Data (FRED) website (fred.stlouisfed.org) on employment in several business sectors—note that the sectors shown in the figure do not account for all employment in the U.S. economy. For ease of comparison, total employment in each sector in February 2020 has been set equal to 100.
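Setting each sector's February 2020 employment equal to 100 is simple index arithmetic: each month's employment is divided by its February 2020 value and multiplied by 100. A quick sketch, using made-up payroll numbers rather than the actual FRED series:

```python
# Index a monthly employment series to a base month (base month = 100).
# The figures below are invented for illustration, not actual FRED data.
employment = {                 # thousands of jobs in one sector
    "Feb 2020": 12_500,
    "Apr 2020": 10_000,
    "Nov 2020": 11_875,
}

base = employment["Feb 2020"]
index = {month: 100 * jobs / base for month, jobs in employment.items()}

print(index["Apr 2020"])  # 80.0  -> employment 20 percent below February
print(index["Nov 2020"])  # 95.0  -> employment 5 percent below February
```

Indexing this way makes sectors of very different sizes directly comparable on one chart: a reading of 95 means the same thing (5 percent below the February level) whether the sector employs half a million workers or twenty million.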

Employment in each sector dropped sharply between February and April as the pandemic began to spread throughout the United States, leading governors and mayors to order many businesses and schools closed. Even in areas where most businesses remained open, many people became reluctant to shop in stores, eat in restaurants, or exercise in gyms. From April to November, there were substantial employment gains in each sector, with employment in all goods-producing industries and employment in manufacturing (a subcategory of goods-producing industries) in November being just 5 percent less than in February. Employment in professional and business services (firms in this sector include accounting, engineering, legal, consulting, and business software firms) rose to about the same level, but employment in all service industries was still 7 percent below its February level and employment in restaurants and bars was 17 percent below its February level.

Raj Chetty of Harvard University and colleagues have created the Opportunity Insights website that brings together data on a number of economic indicators that reflect employment, income, spending, and production in geographic areas down to the county or, for some cities, the ZIP code level. The Opportunity Insights website can be found HERE.

In a paper using these data, Chetty and colleagues find that during the pandemic “spending fell primarily because high-income households started spending much less.… Spending reductions were concentrated in services that require in-person physical interaction, such as hotels and restaurants …. These findings suggest that high-income households reduced spending primarily because of health concerns rather than a reduction in income or wealth, perhaps because they were able to self-isolate more easily than lower-income individuals (e.g., by substituting to remote work).”

As a result, “Small business revenues in the highest-income and highest-rent ZIP codes (e.g., the Upper East Side of Manhattan) fell by more than 65% between March and mid-April, compared with 30% in the least affluent ZIP codes. These reductions in revenue resulted in a much higher rate of small business closure in affluent areas within a given county than in less affluent areas.” As the revenues of small businesses declined, the businesses laid off workers and sometimes reduced the wages of workers they continued to employ. The employees of these small businesses were typically lower-wage workers. The authors conclude from the data that: “Employment for high-wage workers also rebounded much more quickly: employment levels for workers in the top wage quartile [the top 25 percent of wages] were almost back to pre-COVID levels by the end of May, but remained 20% below baseline for low-wage workers even as of October 2020.”

The paper, which goes into much greater detail than the brief summary just given, can be found HERE.

Census Bureau Releases Results from the American Community Survey

Each year the U.S. Census Bureau conducts the American Community Survey (ACS) by surveying 3.5 million households on a wide range of questions including their income, their employment, their ethnicity, their marital status, how large their house or apartment is, and how many cars they own. The ACS is the most reliable source of data on these issues and is widely used by economists, business managers, and government policy makers. The data for 2019 and for the five-year period 2015-2019 were released on December 10. You can learn more about the survey and explore the data on the ACS website.

The ACS provides data on increases in income over time by different ethnic groups. This news article discusses the result that between 2005 and 2019, the incomes of Asian Americans grew the fastest, followed by the incomes of Hispanics, the incomes of non-Hispanic whites, and the incomes of African Americans.

Does Automation Lead to Permanent Job Losses?

This post on the Federal Reserve Bank of St. Louis’s Page One blog discusses how the belief that automation can lead to permanent job losses is an example of the “lump of labor” fallacy. Click HERE to read the article.

The post refers to the circular-flow diagram, which we discuss in Chapter 2 and in Chapter 18 in the textbook. We discuss the effects of automation and robots on the labor market in Chapter 16.