Is it 1987 for AI?

Image generated by ChatGPT 5 of a 1981 IBM personal computer.

The modern era of information technology began in the 1980s with the spread of personal computers. A key development was the introduction of the IBM personal computer in 1981. The Apple II, designed by Steve Jobs and Steve Wozniak and introduced in 1977, was the first widely used personal computer, but the IBM personal computer had several advantages over the Apple II. For decades, IBM had been the dominant firm in information technology worldwide. The IBM System/360, introduced in 1964, was by far the most successful mainframe computer in the world. Many large U.S. firms depended on IBM to meet their needs for payroll processing, general accounting, inventory management, and billing.

Because these firms were often reliant on IBM for installing, maintaining, and servicing their computers, they were reluctant to shift to performing key tasks with personal computers like the Apple II. This reluctance was reinforced by the fact that few managers were familiar with Apple or other early personal computer firms like Commodore or Tandy, which sold the TRS-80 through Radio Shack stores. In addition, many firms lacked the technical staffs to install, maintain, and repair personal computers. Initially, it was easier for firms to rely on IBM to perform these tasks, just as it had long performed the same tasks for firms’ mainframe computers.

By 1983, the IBM PC had overtaken the Apple II as the best-selling personal computer in the United States. In addition, IBM had decided to rely on other firms to supply its computer chips (Intel) and operating system (Microsoft) rather than develop its own proprietary computer chips and operating system. This so-called open architecture made it possible for other firms, such as Dell and Gateway, to produce personal computers that were similar to IBM’s. The result was to give an incentive for firms to produce software that would run on both the IBM PC and the “clones” produced by other firms, rather than produce software for Apple personal computers. Key software such as the spreadsheet program Lotus 1-2-3 and word processing programs, such as WordPerfect, cemented the dominance of the IBM PC and the IBM clones over Apple, which was largely shut out of the market for business computers.

As personal computers began to be widely used in business, there was a general expectation among economists and policymakers that business productivity would increase. Productivity, measured as output per hour of work, had grown at a fairly rapid average annual rate of 2.8 percent between 1948 and 1972. As we discuss in Macroeconomics, Chapter 10 (Economics, Chapter 20 and Essentials of Economics, Chapter 14), rising productivity is the key to an economy achieving a rising standard of living. Unless output per hour worked increases over time, consumption per person will stagnate. An annual growth rate of 2.8 percent will lead to noticeable increases in the standard of living.

Economists and policymakers were concerned when productivity growth slowed beginning in 1973. From 1973 to 1980, productivity grew at an annual rate of only 1.3 percent—less than half the growth rate from 1948 to 1972. Despite the widespread adoption of personal computers by businesses during the 1980s, the growth rate of productivity increased only to 1.5 percent. In 1987, Nobel laureate Robert Solow of MIT famously remarked: “You can see the computer age everywhere but in the productivity statistics.” Economists labeled Solow’s observation the “productivity paradox.” With hindsight, it’s now clear that it takes time for businesses to adapt to a new technology, such as personal computers. In addition, the development of the internet, increases in the computing power of personal computers, and the introduction of innovative software were necessary before a significant increase in productivity growth rates occurred in the mid-1990s.

Result when ChatGPT 5 is asked to create an image illustrating ChatGPT

The release of ChatGPT in November 2022 is likely to be seen in the future as at least as important an event in the evolution of information technology as the introduction of the IBM PC in August 1981. Just as with personal computers, many people have been predicting that generative AI programs will have a substantial effect on the labor market and on productivity.

In this recent blog post, we discussed the conflicting evidence as to whether generative AI has been eliminating jobs in some occupations, such as software coding. Has AI had an effect on productivity growth? The following figure shows the rate of productivity growth in each quarter since the fourth quarter of 2022. The figure shows an acceleration in productivity growth beginning in the fourth quarter of 2023. From the fourth quarter of 2023 through the fourth quarter of 2024, productivity grew at an annual rate of 3.1 percent—higher than during the period from 1948 to 1972. Some commentators attributed this surge in productivity to the effects of AI.

However, the increase in productivity growth wasn’t sustained, with the growth rate in the first half of 2025 being only 1.3 percent. That slowdown makes it more likely that the surge in productivity growth was attributable to the recovery from the 2020 Covid recession or was simply an example of the wide fluctuations that can occur in productivity growth. The following figure, showing the entire period since 1948, illustrates how volatile quarterly rates of productivity growth are.

How large an effect will AI ultimately have on the labor market? If many current jobs are replaced by AI, is it likely that the unemployment rate will soar? That’s a prediction that has often been made in the media. For instance, Dario Amodei, the CEO of generative AI firm Anthropic, predicted during an interview on CNN that AI will wipe out half of all entry-level jobs in the U.S. and cause the unemployment rate to rise to between 10 percent and 20 percent.

Although Amodei is likely correct that AI will wipe out many existing jobs, it’s unlikely that the result will be a large increase in the unemployment rate. As we discuss in Macroeconomics, Chapter 9 (Economics, Chapter 19 and Essentials of Economics, Chapter 13), the U.S. economy creates and destroys millions of jobs every year. Consider, for instance, the following table from the most recent “Job Openings and Labor Turnover Survey” (JOLTS) report from the Bureau of Labor Statistics (BLS). In June 2025, 5.2 million people were hired and 5.1 million left (were “separated” from) their jobs as a result of quitting, being laid off, or being fired.

Most economists believe that one of the strengths of the U.S. economy is the flexibility of the U.S. labor market. With a few exceptions, “employment at will” holds in every state, which means that a business can lay off or fire a worker without having to provide a cause. Unionization rates are also lower in the United States than in many other countries. U.S. workers have less job security than in many other countries, but—crucially—U.S. firms are more willing to hire workers because they can more easily lay them off or fire them if they need to. (We discuss the greater flexibility of U.S. labor markets in Macroeconomics, Chapter 11 (Economics, Chapter 21).)

The flexibility of the U.S. labor market means that it has shrugged off many waves of technological change. AI will have a substantial effect on the economy and on the mix of jobs available. But will the effect be greater than that of electrification in the late nineteenth century or the effect of the automobile in the early twentieth century or the effect of the internet and personal computing in the 1980s and 1990s? The introduction of automobiles wiped out jobs in the horse-drawn vehicle industry, just as the internet has wiped out jobs in brick-and-mortar retailing. People whose jobs are eliminated by technology find other jobs; sometimes the new jobs are better than the ones they had and sometimes they are worse. But economic historians have shown that technological change has never caused a spike in the U.S. unemployment rate. It seems likely—but not certain!—that the same will be true of the effects of the AI revolution.

Which jobs will AI destroy and which new jobs will it create? Except in a rough sense, the truth is that it is very difficult to tell. Attempts to forecast technological change have a dismal history. To take one of many examples, in 1998, Paul Krugman, later to win the Nobel Prize, cast doubt on the importance of the internet: “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” Krugman, Amodei, and other prognosticators of the effects of technological change simply lack the knowledge to make an informed prediction because the required knowledge is spread across millions of people.

That knowledge only becomes available over time. The actions of consumers and firms interacting in markets mobilize information that is initially known only partially to any one person. In 1945, Friedrich Hayek made this argument in “The Use of Knowledge in Society,” which is one of the most influential economics articles ever written. One of Hayek’s examples is an unexpected decrease in the supply of tin. How will this development affect the economy? We find out only by observing how people adapt to a rising price of tin: “The marvel is that … without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation are made [by the increase in the price of tin] to use the material or its products more sparingly.” People adjust to changing conditions in ways that we lack sufficient information to reliably forecast. (We discuss Hayek’s view of how the market system mobilizes the knowledge of workers, consumers, and firms in Microeconomics, Chapter 2.)

It’s up to millions of engineers, workers, and managers across the economy, often through trial and error, to discover how AI can best reduce the cost of producing goods and services or improve their quality. Competition among firms drives them to make the best use of AI. In the end, AI may result in more people or fewer people being employed in any particular occupation.  At this point, there is no way to know.

 

Has AI Damaged the Tech Job Market for Recent College Grads?

Image generated by ChatGPT 5

“Artificial intelligence is profoundly limiting some young Americans’ employment prospects, new research shows.” That’s the opening sentence of a recent opinion column in the Wall Street Journal. The columnist was reacting to a new academic paper by economists Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen of Stanford University. (See also this Substack post by Chandar that summarizes the results of their paper.) The authors find that:

“[S]ince the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment … In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor.”

The authors conclude that “our results are consistent with the hypothesis that generative AI has begun to significantly affect entry-level employment.”

About a month ago, we wrote a blog post looking at whether unemployment among young college graduates has been abnormally high in recent months. The following figure from that post shows that, over time, the unemployment rate for the youngest college graduates (the red line) is nearly always above the unemployment rate for the population as a whole (the green line), while the unemployment rate for college graduates 25 to 34 years old (the blue line) is nearly always below the unemployment rate for the population as a whole. In July of this year, the unemployment rate for the population as a whole was 4.2 percent, while the unemployment rate for college graduates 20 to 24 years old was 8.5 percent, and the unemployment rate for college graduates 25 to 34 years old was 3.8 percent.

As the following figure (also reproduced from that blog post) shows, the increase in unemployment among young college graduates has been concentrated among males. Does higher male unemployment indicate that AI is eliminating jobs, such as software coding, that are disproportionately male? Data journalist John Burn-Murdoch argues against this conclusion, noting that the data show that “early-career coding employment is now tracking ahead of the [U.S.] economy.”

Another recent paper, written by Sarah Eckhardt and Nathan Goldschlag of the Economic Innovation Group, is also skeptical of the view that firms’ adoption of generative AI programs is reducing employment in certain types of jobs. They use a measure of how exposed particular jobs are to AI, called AI Occupational Exposure (AIOE), developed by Edward Felten of Princeton University and by Manav Raj and Robert Seamans of New York University. The following table from Eckhardt and Goldschlag’s paper shows the five most AI-exposed jobs and the five least AI-exposed jobs.

They divide all occupations into quintiles based on the exposure of the occupations to AI. Their key results are given in the following table, which shows that the occupations that are most exposed to the effects of AI—quintiles 4 and 5—have lower unemployment rates and higher wages than do the occupations that are least exposed to AI. 
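As a rough illustration of the kind of calculation involved (a hedged sketch, not Eckhardt and Goldschlag’s actual code; the file name and column names below are hypothetical), occupations can be binned into AIOE quintiles and labor-market outcomes compared across the bins:

```python
import pandas as pd

# Hypothetical occupation-level data: one row per occupation, with an
# AI Occupational Exposure (AIOE) score, an unemployment rate, and an
# average wage. The file and column names are assumptions for illustration.
df = pd.read_csv("occupations.csv")  # columns: occupation, aioe, unemp_rate, avg_wage

# Divide occupations into quintiles by AIOE score
# (1 = least exposed to AI, 5 = most exposed).
df["aioe_quintile"] = pd.qcut(df["aioe"], q=5, labels=[1, 2, 3, 4, 5])

# Compare average labor-market outcomes across the exposure quintiles.
summary = df.groupby("aioe_quintile", observed=True)[["unemp_rate", "avg_wage"]].mean()
print(summary)
```

If the pattern the authors report holds in such data, the rows for quintiles 4 and 5 would show lower average unemployment rates and higher average wages than the rows for quintiles 1 and 2.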

The Brynjolfsson, Chandar, and Chen paper mentioned at the beginning of this post uses a larger data set of workers by occupation from ADP, a private firm that processes payroll data for about 25 percent of U.S. workers. Figure 1 from their paper, reproduced here, shows that employment in two occupations representative of those most exposed to AI—software developers and customer service—declined sharply after generative AI programs became widely available in late 2022.

They don’t find this pattern for all occupations, as shown in the following figure from their paper.

Finally, they show results by occupational quintiles, with workers aged 22 to 25 being hard hit in the two occupational quintiles (4 and 5) most exposed to AI. The data show total employment growth from October 2022 to July 2025 by age group and exposure to AI.

Economics blogger Noah Smith has raised an interesting issue about Brynjolfsson, Chandar, and Chen’s results. Why would we expect the negative effect of AI on employment to be so highly concentrated among younger workers? Why would employment in the most AI-exposed occupations be growing rapidly among workers aged 35 and above? Smith wonders “why companies would be rushing to hire new 40-year-old workers in those AI-exposed occupations.” He continues:

“Think about it. Suppose you’re a manager at a software company, and you realize that the coming of AI coding tools means that you don’t need as many software engineers. Yes, you would probably decide to hire fewer 22-year-old engineers. But would you run out and hire a ton of new 40-year-old engineers?”

Both the papers discussed here are worth reading for their insights on how the labor market is evolving in the generative AI era. But taken together, they indicate that it is probably too early to arrive at firm conclusions about the effects of generative AI on the job market for young college graduates or other groups.

How Well Are Recent College Graduates Doing in the Labor Market?

Image generated by ChatGPT-4o

A number of news stories have highlighted the struggles some recent college graduates have had in finding a job. A report earlier this year by economists Jaison Abel and Richard Deitz at the Federal Reserve Bank of New York noted that: “The labor market for recent college graduates deteriorated noticeably in the first quarter of 2025. The unemployment rate jumped to 5.8 percent—the highest reading since 2021—and the underemployment rate rose sharply to 41.2 percent.” The authors define “underemployment” this way: “A college graduate working in a job that typically does not require a college degree is considered underemployed.”

The following figure shows data on the unemployment rate for people ages 20 to 24 years (red line) with a bachelor’s degree, the unemployment rate for people ages 25 to 34 years (blue line) with a bachelor’s degree, and the unemployment rate for the whole population (green line) whatever their age and level of education. (Note that the values for college graduates are for those people who have a bachelor’s degree but no advanced degree, such as a Ph.D. or an M.D.)

The figure shows that unemployment rates are more volatile for both categories of college graduates than the unemployment rate for the population as a whole. The same is true of the unemployment rates for nearly any sub-category of the unemployed, largely because the number of people included in the sub-categories of the Bureau of Labor Statistics (BLS) household survey is much smaller than the number for the population as a whole. The figure shows that, over time, the unemployment rate for the youngest college graduates is nearly always above the unemployment rate for the population as a whole, while the unemployment rate for college graduates 25 to 34 years old is nearly always below the unemployment rate for the population as a whole. In June of this year, the unemployment rate for the population as a whole was 4.1 percent, while the unemployment rate for the youngest college graduates was 7.3 percent.

Why is the unemployment rate for the youngest college graduates so high? An article in the Wall Street Journal offers one explanation: “The culprit, economists say, is a general slowdown in hiring. That hasn’t really hurt people who already have jobs, because layoffs, too, have remained low, but it has made it much harder for people who don’t have work to find employment.” The following figure shows that the hiring rate—defined as the number of hires during a month divided by total employment in that month—has been falling. The hiring rate in June was 3.4 percent, which—apart from two months at the beginning of the Covid pandemic—is the lowest rate since February 2014.
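The hiring-rate definition is simple enough to verify directly. A minimal sketch of the arithmetic (both numbers below are illustrative round figures, not the official BLS data):

```python
# Hiring rate = hires during a month / total employment that month, x 100.
hires = 5.2e6        # assumed hires in a month, roughly JOLTS-scale (illustrative)
employment = 153e6   # assumed total employment level (illustrative)

hiring_rate = hires / employment * 100
print(f"Hiring rate: {hiring_rate:.1f} percent")  # about 3.4 percent
```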

Abel and Deitz, of the New York Fed, have calculated the underemployment rate for new college graduates and for all college graduates. These data are shown in the following figure from the New York Fed site. The definitions used are somewhat different from the ones in the earlier figures: the definition of college graduates includes people who have advanced degrees, and the definition of young college graduates includes people aged 22 to 27 years. The data are three-month moving averages.

The data show that the underemployment rates for both recent graduates and all graduates are relatively high for the whole period shown. Typically, more than 30 percent of all college graduates and more than 40 percent of recent college graduates work in jobs in which more than 50 percent of employees don’t have college degrees. The latest underemployment rate for recent graduates is the highest since March 2022. It’s lower, though, than the rate for most of the period between the Great Recession of 2007–2009 and the Covid recession of 2020.

In a recent article, John Burn-Murdoch, a data journalist for the Financial Times, has made the point that the high unemployment rates of recent college graduates are concentrated among males. As the following figure shows, in recent months, unemployment rates among male college graduates 20 to 24 years old have been significantly higher than the unemployment rates among female college graduates. In June 2025, the unemployment rate for male recent college graduates was 9.8 percent, well above the 5.4 percent unemployment for female recent college graduates.

What explains the rise in male unemployment relative to female unemployment? Burn-Murdoch notes that, contrary to some media reports, the answer doesn’t seem to be that AI has resulted in a contraction in entry-level software coding jobs that have traditionally been held disproportionately by males. He presents data showing that “early-career coding employment is now tracking ahead of the [U.S.] economy.”

Instead he believes that the key is the continuing strong growth in healthcare jobs, which have traditionally been held disproportionately by females. The availability of these jobs has allowed women to fare better than men in an economy in which hiring rates have been relatively low.

As with most short-run trends, it’s possible that the relatively high unemployment rates experienced by recent college graduates will not continue in the long run.

DeepSeek, Nvidia, and the Effect of New Information on Stock Prices

At the close of stock trading on Friday, January 24 at 4 pm EST, Nvidia’s stock had a price of $142.62 per share. When trading reopened at 9:30 am on Monday, January 27, Nvidia’s stock price plunged to $127.51. The total value of all Nvidia’s stock (the firm’s market capitalization or market cap) dropped by $589 billion—the largest one-day drop in market cap in history. The following figure from the Wall Street Journal shows movements in Nvidia’s stock price over the past six months.

What happened to cause such a dramatic decline in Nvidia’s stock price? As we discuss in Macroeconomics, Chapter 6 (Economics, Chapter 8, and Money, Banking, and the Financial System, Chapter 6), Nvidia’s price of $142.62 at the close of trading on January 24—like the price of any publicly traded stock—reflected all the information available to investors about the company. For the company’s stock to have declined so sharply at the beginning of the next trading day, important new information must have become available—which is exactly what happened.

As we discussed in this blog post from last October, Nvidia has been very successful in producing state-of-the-art computer chips that power the most advanced generative artificial intelligence (AI) software. Even after Monday’s plunge in the value of its stock, Nvidia still had a market cap of nearly $3.5 trillion at the end of the day. It wasn’t news that DeepSeek, a Chinese AI company, had produced AI software called R1 that was similar to ChatGPT and other AI software produced by U.S. companies. The news was that R1—the latest version of the software is called V3—appeared to be comparable in many ways to the AI software produced by U.S. firms, but had been produced by DeepSeek despite not using the state-of-the-art Nvidia chips used in those AI programs.

The Biden administration had barred export to China of the newest Nvidia chips to keep Chinese firms from surging ahead of U.S. firms in developing AI. DeepSeek claimed to have developed its software using less advanced chips and to have trained its software at a much lower cost than U.S. firms have been incurring to train their software. (“Training” refers to the process by which engineers teach software to be able to accurately solve problems and answer questions.) Because DeepSeek’s costs are lower, the company charges less than U.S. AI firms do to use its computer infrastructure to handle business tasks like responding to consumer inquiries.

If the claims regarding DeepSeek’s software are accurate, then AI firms may no longer require the latest Nvidia chips and may be forced to reduce the prices they can charge firms for licensing their software. The demand for electricity generation may also decline if it turns out that the demand for AI data centers, which use very large amounts of power, will be lower than expected.

But on Monday it wasn’t yet clear whether the claims being made about DeepSeek’s software were accurate. Some industry observers speculated that, despite the U.S. prohibition on exporting the latest Nvidia chips to China, DeepSeek had managed to obtain them but was reluctant to admit that it had. There were also questions about whether DeepSeek had actually spent as little as it claimed in training its software.

What happens to the price of Nvidia’s stock during the rest of the week will indicate how investors are evaluating the claims DeepSeek made about its AI software.

The Amazing Rise of Nvidia

Nvidia’s headquarters in Santa Clara, California. (Photo from nvidia.com)

Nvidia was founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, electrical engineers who started the company with the goal of designing computer chips that would increase the realism of images in video games. The firm achieved a key breakthrough in 1999 when it invented the graphics processing unit, or GPU, which it marketed under the name GeForce 256. In 2001, Microsoft used an Nvidia chip in its new Xbox video game console, helping Nvidia to become the dominant firm in the market for GPUs.

The technology behind GPUs has turned out to be usable not just for gaming, but also for powering AI—artificial intelligence—software. The market for Nvidia’s chips exploded, with technology giants Google, Microsoft, Facebook, and Amazon, as well as many startups, ordering large quantities of Nvidia’s chips.

By 2016, Nvidia CEO Jen-Hsun Huang could state in an interview that: “At no time in the history of our company have we been at the center of such large markets. This can be attributed to the fact that we do one thing incredibly well—it’s called GPU computing.” Earlier this year, an article in the Economist noted that: “Access to GPUs, and in particular those made by Nvidia, the leading supplier, is vital for any company that wants to be taken seriously in artificial intelligence (AI).”

Nvidia’s success has been reflected in its stock price. When Nvidia became a public company in 1999 by undertaking an initial public offering (IPO) of stock, a share of the firm’s stock had a price of $0.04, adjusted for later stock splits. The large profits Nvidia has been earning in recent years have caused its stock price to rise to more than $140 a share.

(With a stock split, a firm reduces the price per share of its stock by giving shareholders additional shares while holding the total value of the shares constant. For example, in June of this year Nvidia carried out a 10-for-1 stock split, which gave shareholders nine additional shares of stock for each share they owned. The total value of the shares was the same, but each share now had a price that was 10 percent of its price before the split. We discuss the stock market in Microeconomics, Chapter 8, Section 8.2, Macroeconomics, Chapter 6, Section 6.2, and Economics, Chapter 8, Section 8.2.)
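A minimal sketch of the split arithmetic just described (the share count and prices are illustrative, not Nvidia’s actual figures):

```python
# A 10-for-1 stock split: each pre-split share becomes ten shares, each
# new share trades at one-tenth of the old price, and the total value of
# the holding is unchanged.
shares_before = 100        # illustrative holding
price_before = 1200.00     # illustrative pre-split price, dollars per share

split_ratio = 10
shares_after = shares_before * split_ratio   # 1,000 shares
price_after = price_before / split_ratio     # $120.00 per share

# Total value is the same before and after the split.
assert shares_before * price_before == shares_after * price_after
print(f"Before: {shares_before} shares at ${price_before:,.2f}")
print(f"After:  {shares_after} shares at ${price_after:,.2f}")
```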

The following figure from the Wall Street Journal shows the sharp increase in Nvidia’s stock price over the past three years as AI has become an increasingly important part of the economy.

Nvidia’s market capitalization (or market cap)—the total value of all of its outstanding shares of stock—is $3.5 trillion.  How large is that? Torsten Sløk, the chief economist at Apollo, an asset management firm, has noted that, as shown in the following figure, Nvidia’s market cap is larger than the total market caps—the total value of all the publicly traded firms—in five large economies.

Can Nvidia’s great success continue? Will it be able to indefinitely dominate the market for AI chips? As we noted in Apply the Concept “Do Large Firms Live Forever?” in Microeconomics Chapter 14, in the long run, even the most successful firms eventually have their positions undermined by competition. That Nvidia has a larger stock market value than the total value of all the public companies in Germany or the United Kingdom is extraordinary and seems impossible to sustain. It may indicate that investors have bid up the price of Nvidia’s stock above the value that can be justified by a reasonable forecast of its future profits.

There are already some significant threats to Nvidia’s dominant position in the market for AI chips. GPUs were originally designed to improve computer displays of graphics rather than to power AI software. So, one way of competing with Nvidia that some startups are trying to exploit is to design chips specifically for use in AI. Larger chips may also allow AI software to run on fewer chips than are needed when using GPUs, possibly reducing the total cost of the chips necessary to run sophisticated AI software. In addition, existing large technology firms, such as Amazon and Microsoft, have been developing chips that may be able to compete with Nvidia.

As with any firm, Nvidia’s continued success requires it to innovate sufficiently to stay ahead of the many competitors that would like to cut into the firm’s colossal profits.

Glenn’s Interview with Jim Pethokoukis

Glenn discusses Fed policy, the state of the U.S. economy, economic growth, China in the world economy, industrial policy, protectionism, and other topics in this episode of the Political Economy podcast from the American Enterprise Institute.

https://podcasts.apple.com/us/podcast/glenn-hubbard-a-pro-growth-policy-agenda/id589914386?i=1000665131415

Glenn’s Op-Ed on the Need for Pro-Growth Policies

(Photo from the New York Times.)

This op-ed originally appeared in the Wall Street Journal.

Put Growth Back on the Political Agenda

In a campaign season dominated by the past, a central economic topic is missing: growth. Rapid productivity growth raises living standards and incomes. Resources from those higher incomes can boost support for public goods such as national defense and education, or can reconfigure supply chains or shore up social insurance programs. A society without growth requires someone to be worse off for you to be better off. Growth breaks that zero-sum link, making it a political big deal.

So why is the emphasis on growth fading? More than economics is at play. While progress from technological advances and trade generally is popular, the disruption that inevitably accompanies growth and hits individuals, firms and communities has many politicians wary. Such concerns can lead to excessive meddling via industrial policy.

As we approach the next election, the stakes for growth are high. Regaining the faster productivity that prevailed before the global financial crisis requires action. The nonpartisan Congressional Budget Office estimates  potential gross domestic product growth of 1.8% over the coming decade, and somewhat lower after that. Those figures are roughly 1 percentage point lower than the growth rate over the three decades before the pandemic. Many economists believe productivity gains from generative artificial intelligence can raise growth in coming decades. But achieving those gains requires an openness to change that is rare in a political climate stuck in past grievances about disruption—the perennial partner of growth.

Traditionally, economic policy toward growth emphasized support for innovation through basic research. Growth also was fostered by reducing tax burdens on investment, streamlining regulation (which has proliferated during the Biden administration) and expanding markets. These important actions have flagged in recent years. But such attention, while valuable, masks inattention to adverse effects on some individuals and communities, raising concerns about whether open markets advance broad prosperity.

This opened a lane for backward-looking protectionism and industrial policy from Democrats and Republicans alike. Absent strong national-defense arguments (which wouldn’t include tariffs on Canadian steel or objections to Japanese ownership of a U.S. steel company), protectionism limits growth. According to polls by the Chicago Council on Global Affairs, roughly three-fourths of Americans say international trade is good for the economy. Finally, protectionism belies ways in which gains from openness may be preserved, such as by simultaneously offering support for training and work for communities of individuals buffeted by trade and technological change.

On industrial policy, it is true that markets can’t solve every allocation problem. But such concerns underpin arguments for greater federal support of research for new technologies in defense, climate-change mitigation, and private activity, not micromanaged subsidies to firms and industries. If a specific defense activity merits assistance, it could be subsidized. These alternatives mitigate the problems in conventional industrial policy of “winner picking” and, just as important, the failure to abandon losers. It is policymakers’ hyperattention to those buffeted by change that hampers policy effectiveness and, worse, invites rent-seeking behavior and costly regulatory micromanagement.

Examples abound. Appending child-care requirements to the Chips Act and the inaptly named Inflation Reduction Act has little to do with those laws’ industrial policy purpose. The Biden administration’s opposition to Nippon Steel’s acquisition of U.S. Steel raises questions amid the current wave of industrial policy. How is a strong American ally’s efficient operation of an American steel company with U.S. workers an industrial-policy problem? Flip-flops on banning TikTok fuel uncertainty about business operations in the name of industrial policy.

The wrongly focused hyperattention is supposedly grounded in putting American workers first. But it raises three problems. First, the interventions raise the cost of investments, and the jobs they are to create or protect, by using mandates and generating policy uncertainty. Second, they contradict the economic freedom in market economies of voluntary transactions. Absent a strong national-security foundation, why is public policy directing investment in or ownership of assets? Such policies threaten the nation’s long-term prosperity by discouraging investment and invite rent-seeking in a way that voluntary market transactions don’t. Both problems hamstring growth. 

Third, and perhaps most important, such micromanagement misses the economic and political mark of actually helping individuals and communities disrupted by growth-enhancing openness. A more serious agenda would focus on training suited to current markets (through, for example, more assistance to community colleges), on work (through expanding the Earned Income Tax Credit), and on aid to communities hit by prolonged employment loss (through services that enhance business formation and job creation). The federal government could also establish research centers around the country to disseminate ideas for businesses. 

Growth matters—for individual livelihoods, business opportunities and public finances. Pro-growth policies that account for disruption’s effects while encouraging innovation, saving, capital formation, skill development and limited regulation must return to the economic agenda. A shift to prospective, visionary thinking would reorient the bipartisan, backward-looking protectionism and industrial policy that weaken growth and fail to address disruption.

Will the United States Experience a Sustained Boom in the Growth Rate of Labor Productivity?

Blue Planet Studio/Shutterstock

Recent articles in the business press have discussed the possibility that the U.S. economy is entering a period of higher growth in labor productivity:

“Fed’s Goolsbee Says Strong Hiring Hints at Productivity Growth Burst” (link)

“US Productivity Is on the Upswing Again. Will AI Supercharge It?” (link)

“Can America Turn a Productivity Boomlet Into a Boom?” (link)

In Macroeconomics, Chapter 16, Section 16.7 (Economics, Chapter 26, Section 26.7), we highlighted  the role of growth in labor productivity in explaining the growth rate of real GDP using the following equations. First, an identity:

Real GDP = Number of hours worked x (Real GDP/Number of hours worked),

where (Real GDP/Number of hours worked) is labor productivity.

And because the growth rate of a product of two variables is (approximately) equal to the sum of the growth rates of those variables, we have:

Growth rate of real GDP = Growth rate of hours worked + Growth rate of labor productivity
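For readers who want the intermediate step, here is a minimal sketch of why multiplication of levels turns into addition of growth rates (exact in continuous time; an approximation for annual rates):

```latex
% Take logs of the identity Y = H \times (Y/H), where Y is real GDP and
% H is hours worked, then differentiate with respect to time:
\begin{aligned}
\ln Y &= \ln H + \ln\!\left(\frac{Y}{H}\right) \\
\frac{d \ln Y}{dt} &= \frac{d \ln H}{dt} + \frac{d \ln (Y/H)}{dt} \\
g_Y &= g_H + g_{Y/H}
\end{aligned}
% In discrete time, (1 + g_Y) = (1 + g_H)(1 + g_{Y/H}), so
% g_Y = g_H + g_{Y/H} + g_H \cdot g_{Y/H}; the cross term is negligible
% for growth rates of a few percent per year.
```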

From 1950 to 2023, real GDP grew at an annual average rate of 3.1 percent. In recent years, real GDP has been growing more slowly. For example, it grew at a rate of only 2.0 percent from 2000 to 2023. In February 2024, the Congressional Budget Office (CBO) forecast that real GDP would grow at 2.0 percent from 2024 to 2034. Although the difference between a growth rate of 3.1 percent and a growth rate of 2.0 percent may seem small, if real GDP were to return to growing at 3.1 percent per year, it would be $3.3 trillion larger in 2034 than if it grows at 2.0 percent per year. The additional $3.3 trillion in real GDP would result in higher incomes for U.S. residents and would make it easier for the federal government to reduce the size of the federal budget deficit and to better fund programs such as Social Security and Medicare. (We discuss the issues concerning the federal government’s budget deficit in this earlier blog post.)
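To see where a difference of that size comes from, here is a minimal sketch of the compounding arithmetic. The 2024 starting level of real GDP is an illustrative assumption (about $23 trillion), not a figure from the post, so the computed gap is close to, but not exactly, the $3.3 trillion cited:

```python
# Compare the level of real GDP in 2034 under two compound annual growth rates.

def future_gdp(base: float, rate: float, years: int) -> float:
    """Compound `base` forward at `rate` per year for `years` years."""
    return base * (1 + rate) ** years

base_2024 = 23.0  # assumed 2024 real GDP, trillions of dollars (illustrative)
fast = future_gdp(base_2024, 0.031, 10)  # 3.1 percent per year
slow = future_gdp(base_2024, 0.020, 10)  # 2.0 percent per year

print(f"2034 real GDP at 3.1 percent growth: ${fast:.1f} trillion")
print(f"2034 real GDP at 2.0 percent growth: ${slow:.1f} trillion")
print(f"Gap: ${fast - slow:.1f} trillion")  # roughly $3 trillion
```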

Why has growth in real GDP slowed from a 3.1 percent rate to a 2.0 percent rate? The two expressions on the right-hand side of the equation for growth in real GDP—the growth in hours worked and the growth in labor productivity—have both slowed. Slowing population growth and a decline in the average number of hours worked per worker have caused the growth rate of hours worked to slow substantially, from a rate of 2.0 percent per year from 1950 to 2023 to a forecast rate of only 0.4 percent per year from 2024 to 2034.

Falling birthrates explain most of the decline in population growth. Although lower birthrates have been partially offset by higher levels of immigration in recent years, it seems unlikely that birthrates will increase much even in the long run, and levels of immigration also seem unlikely to increase substantially in the future. Therefore, a significant increase in the growth rate of real GDP requires an increase in the rate of growth of labor productivity.

The Bureau of Labor Statistics (BLS) publishes quarterly data on labor productivity. (Note that the BLS series is for labor productivity in the nonfarm business sector rather than for the whole economy. Output of the nonfarm business sector excludes output by government, nonprofit businesses, and households. Over long periods, growth in real GDP per hour worked and growth in real output of the nonfarm business sector per hour worked have similar trends.) The following figure is taken from the BLS report “Productivity and Costs,” which was released on February 1, 2024.

Note that the growth in labor productivity increased during the last three quarters of 2023, whether we measure the growth rate as the percentage change from the same quarter in the previous year or as growth in a particular quarter expressed as an annual rate. It’s this increase in labor productivity during 2023 that has led to speculation that labor productivity might be entering a period of higher growth. The following figure shows labor productivity growth, measured as the percentage change from the same quarter in the previous year, for the whole period from 1950 to 2023.

The figure indicates that labor productivity has fluctuated substantially over this period. We can note, in particular, productivity growth during two periods: First, from 2011 to 2018, labor productivity grew at the very slow rate of 0.9 percent per year. Some of this slowdown reflected the slow recovery of the U.S. economy from the Great Recession of 2007-2009, but the slowdown persisted long enough to cause concern that the U.S. economy might be entering a period of stagnation or very slow growth.

Second, from 2019 through 2023, labor productivity went through very large swings. Labor productivity experienced strong growth during 2019, then, as the Covid-19 pandemic began affecting the U.S. economy, labor productivity soared through the first half of 2021 before declining for five consecutive quarters from the first quarter of 2022 through the first quarter of 2023—the first time productivity had fallen for that long a period since the BLS first began collecting the data. Although these swings were particularly large, the figure shows that during and in the immediate aftermath of recessions labor productivity typically fluctuates dramatically. The reason for the fluctuations is that firms can be slow to lay workers off at the beginning of a recession—which causes labor productivity to fall—and slow to hire workers back at the beginning of an economic recovery—which causes labor productivity to rise.

Does the recent increase in labor productivity growth represent a trend? Labor productivity growth, measured as the percentage change since the same quarter in the previous year, was 2.7 percent during the fourth quarter of 2023—higher than in any quarter since the first quarter of 2021. Measured as the percentage change from the previous quarter at an annual rate, labor productivity grew at a very high average rate of 3.9 percent during the last three quarters of 2023. It’s this high rate that some observers are pointing to when they wonder whether growth in labor productivity is on an upward trend.
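Because the two measures in the preceding paragraph are easy to conflate, here is a minimal sketch of how each is computed from a quarterly productivity index (the index values are made up for illustration, not BLS data):

```python
# Hypothetical quarterly labor productivity index levels.
index = {"2022Q4": 110.0, "2023Q1": 110.5, "2023Q2": 111.6,
         "2023Q3": 112.7, "2023Q4": 113.0}

# Year-over-year: percentage change from the same quarter one year earlier.
yoy = (index["2023Q4"] / index["2022Q4"] - 1) * 100

# Quarter-over-quarter at an annual rate: the one-quarter growth factor
# compounded over four quarters.
qoq_annualized = ((index["2023Q4"] / index["2023Q3"]) ** 4 - 1) * 100

print(f"Year-over-year growth, 2023Q4: {yoy:.1f} percent")
print(f"Annualized quarter-over-quarter growth, 2023Q4: {qoq_annualized:.1f} percent")
```

The same quarter can therefore show quite different growth rates depending on which convention is used, which is worth keeping in mind when comparing reports on productivity.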

As with any other economic data, you should use caution in interpreting changes in labor productivity over a short period. The productivity data may be subject to large revisions as the two underlying series—real output and hours worked—are revised in coming months. In addition, it’s not clear why the growth rate of labor productivity would be increasing in the long run. The most common reasons advanced are: 1) the productivity gains from the increase in the number of people working from home since the pandemic, 2) businesses’ increased use of artificial intelligence (AI), and 3) potential efficiencies that businesses discovered as they were forced to operate with a shortage of workers during and after the pandemic.

To this point, it’s difficult to evaluate the long-run effects of any of these factors. Economists and business managers haven’t yet reached a consensus on whether working from home increases or decreases productivity. (The debate is summarized in this National Bureau of Economic Research working paper, written by Jose Maria Barrero of Instituto Tecnologico Autonomo de Mexico, and Steven Davis and Nicholas Bloom of Stanford. You may need to access the paper through your university library.)

Many economists believe that AI is a general purpose technology (GPT), which means that it may have broad effects throughout the economy. But to this point, AI hasn’t been adopted widely enough to be a plausible cause of an increase in labor productivity. In addition, as Erik Brynjolfsson and Daniel Rock of MIT and Chad Syverson of the University of Chicago argue in this paper, the introduction of a GPT may initially cause productivity to fall as firms attempt to use an unfamiliar technology. The third reason—efficiency gains resulting from the pandemic—is to this point mainly anecdotal. There are many cases of businesses that discovered efficiencies during and immediately after Covid as they struggled to operate with a smaller workforce, but we don’t yet know whether these cases are sufficiently common to have had a noticeable effect on labor productivity.

So, we’re left with the conclusion that if the high labor productivity growth rates of 2023 can be maintained, the growth rate of real GDP will be correspondingly higher than most economists are expecting. But it’s too early to know whether recent high rates of labor productivity growth are sustainable.

Glenn’s Presentation at the ASSA Session on “The U.S. Economy: Growth, Stagnation or Financial Crisis and Recession?”

Glenn participated in this session hosted by the Society of Policy Modeling and the American Economic Association of Economic Educators and moderated by Dominick Salvatore of Fordham University. (Link to the page for this session in the ASSA program.)

Also making presentations at the session were Robert Barro of Harvard University, Janice Eberly of Northwestern University, Kenneth Rogoff of Harvard University, and John Taylor of Stanford University.

Here is the abstract for Glenn’s presentation:

Economic growth is foundational for living standards and as an objective for economic policy. The emergence of Artificial Intelligence as a General Purpose Technology, on the one hand, and a number of demographic and budget challenges, on the other hand, generate an unusually wide range of future economic outcomes. I focus on key ‘policy’ and ‘political economy’ considerations that increase the likelihood of a more favorable growth path given pre-existing trends and technological possibilities. By ‘policy,’ I consider mechanisms enabling growth through research, taxation, the scope of regulation, and competition. By ‘political economy’ factors, I consider mechanisms to increase economic participation in support of growth and policies that enhance it. I argue that both sets of mechanisms are necessary for a viable pro-growth economic policy framework.

These slides from the presentation highlight some of Glenn’s key points. (Note the cover of the new 9th edition of the textbook in slide 7!)

The Roman Emperor Vespasian Fell Prey to the Lump-of-Labor Fallacy

Bust of the Roman Emperor Vespasian. (Photo from en.wikipedia.org.)

Some people worry that advances in artificial intelligence (AI), particularly the development of chatbots, will permanently reduce the number of jobs available in the United States. Technological change is often disruptive, eliminating jobs and sometimes whole industries, but it also creates new industries and new jobs. For example, the development of mass-produced, low-priced automobiles in the early 1900s wiped out many jobs dependent on horse-drawn transportation, including wagon building and blacksmithing. But automobiles created many new jobs not only on automobile assembly lines, but in related industries, including repair shops and gas stations.

Over the long run, total employment in the United States has increased steadily with population growth, indicating that technological change doesn’t decrease the total number of jobs available. As we discuss in Microeconomics, Chapter 16 (also Economics, Chapter 16), fears that firms will permanently reduce their demand for labor as they increase their use of the capital that embodies technological breakthroughs date back at least to the late 1700s in England, when textile workers known as Luddites—after their leader Ned Ludd—smashed machinery in an attempt to save their jobs. Since that time, the term Luddite has described people who oppose firms increasing their use of machinery and other capital because they fear the increases will result in permanent job losses.

Economists believe that these fears often stem from the lump-of-labor fallacy, which holds that there is only a fixed amount of work to be performed in the economy: the more work that machines perform, the less work that will be available for people to perform. As we’ve noted, though, while machines are substitutes for labor in some uses—such as when chatbot software replaces employees who currently write technical manuals or computer code—they are also complements to labor in other jobs—such as advising firms on how best to use chatbots.

The lump-of-labor fallacy has a long history, probably because it seems like common sense to many people who see the existing jobs that a new technology destroys, without always being aware of the new jobs that the technology creates. There are historical examples of the lump-of-labor fallacy that predate even the original Luddites.

For instance, in his new book Pax: War and Peace in Rome’s Golden Age, the British historian Tom Holland (not to be confused with the actor of the same name, best known for portraying Spider-Man!), discusses an account by the ancient historian Suetonius of an event during the reign of Vespasian, who was Roman emperor from 69 A.D. to 79 A.D. (p. 201):

“An engineer, so it was claimed, had invented a device that would enable columns to be transported to the summit of the [Roman] Capitol at minimal cost; but Vespasian, although intrigued by the invention, refused to employ it. His explanation was a telling one. ‘I have a duty to keep the masses fed.’”

Vespasian had fallen prey to the lump-of-labor fallacy by assuming that eliminating some of the jobs hauling construction materials would reduce the total number of jobs available in Rome. As a result, it would be harder for Roman workers to earn the income required to feed themselves.

Note that, as we discuss in Macroeconomics, Chapters 10 and 11 (also Economics, Chapters 20 and 21), over the long run, technological change is the main source of rising incomes in any economy. Technological change increases the productivity of workers, and the only way for the average worker to consume more output is for the average worker to produce more output. In other words, most economists agree that the main reason that the wages—and, therefore, the standard of living—of the average worker today are much higher than they were in the past is that workers today are much more productive because they have more and better capital to work with.

Although the Roman Empire controlled most of Southern and Western Europe, the Near East, and North Africa for more than 400 years, the living standard of the average citizen of the Empire was no higher at the end of the Empire than it had been at the beginning. Efforts by emperors such as Vespasian to stifle technological progress may be part of the reason why.