Modern industrial capitalism’s bounty has been breathtaking globally, and especially in the U.S. It’s tempting, then, to recall the critics in the crowd in Monty Python’s “Life of Brian” who ask, “What have the Romans ever done for us?,” only to be confronted with a long list of contributions. But, in fact, over time, American capitalism has been saved by adapting to big economic changes.
We’re at another turning point, and the pattern of American capitalism preserving its innovative and disruptive core by responding, if sometimes slowly, to structural shocks will play out as follows.
The magnitude, scope, and speed of technological change surrounding generative artificial intelligence will bring forth a new form of social insurance aimed at the long-term, not just cyclical, impacts of disruption. For individuals, it will include support for work, community colleges and training, and wage insurance for older workers. For places, it will include block grants to communities and areas with high structural unemployment to stimulate new business and job opportunities. Such efforts are a needed departure from a focus on cyclical protection against short-term unemployment toward a longer-term bridge for reconnecting to a changing economy.
These ideas, like America’s historical big responses in land-grant colleges and the GI Bill, combine federal funding support with local approaches (allowing variation in responses to local business and employment opportunities), another hallmark of past U.S. economic policy.
With a stronger economic safety net, the current push toward higher tariffs and protectionism will gradually fade. Protectionism is a wall against change, but it is one that insulates us from progress, too.
A growing budget deficit and strains on public finances will lead to a reliance on consumption taxes to replace the current income tax system, since continuing to raise taxes on saving and investment would arrest growth prospects. For instance, a tax on business cash flow, which places a levy on a firm’s revenue minus all expenses including investment, would replace taxes on business income. Domestic production would be enhanced by adding a border adjustment to business taxes: exports would be exempt from taxation, but companies could not claim a deduction for the cost of imports.
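To make the border-adjustment mechanics concrete, here is a minimal sketch of how such a cash-flow tax base might be computed. All of the dollar figures and the 20 percent rate are hypothetical assumptions for illustration, not details of an actual proposal.

```python
# Hypothetical sketch of a border-adjusted business cash-flow tax.
# The 20% rate and all dollar amounts are illustrative assumptions.

def cash_flow_tax(domestic_sales, exports, domestic_expenses, imports,
                  investment, rate=0.20):
    """Tax owed under a destination-based cash-flow tax.

    Border adjustment: exports are excluded from taxable revenue,
    and imported inputs are not deductible. Investment is expensed
    immediately rather than depreciated.
    """
    taxable_revenue = domestic_sales                 # exports exempt
    deductions = domestic_expenses + investment      # imports not deductible
    return rate * (taxable_revenue - deductions)

# A firm with $100 of domestic sales, $40 of exports, $50 of domestic
# expenses, $20 of imported inputs, and $10 of new investment owes
# 0.20 * (100 - 60) = $8, regardless of its export and import totals.
print(cash_flow_tax(100, 40, 50, 20, 10))  # 8.0
```

Note how the export and import arguments drop out of the calculation entirely: that is the border adjustment at work, which is why such a tax leaves the incentive to locate production in the U.S. intact.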
That reform allows a shift from helter-skelter tariffs to tax reform that boosts investment and offers U.S. and foreign firms alike an incentive to invest in the U.S.
These ideas to retain opportunity amid creative destruction will also refresh American capitalism as the nation celebrates its 250th anniversary. They also celebrate the classical liberal ideas of Adam Smith, whose treatise “The Wealth of Nations” appeared the same year. This refresh marries competition’s role in “The Wealth of Nations” and American capitalism with the ability to compete, again a feature of turning points in capitalism in the U.S.
Decades down the road, this “Project 2026” will have preserved the bounty and mass prosperity of American capitalism.
These observations first appeared in the Wall Street Journal, along with predictions from six other economists and economic historians.
Image generated by ChatGPT 5 of a 1981 IBM personal computer.
The modern era of information technology began in the 1980s with the spread of personal computers. A key development was the introduction of the IBM personal computer in 1981. The Apple II, designed by Steve Jobs and Steve Wozniak and introduced in 1977, was the first widely used personal computer, but the IBM personal computer had several advantages over the Apple II. For decades, IBM had been the dominant firm in information technology worldwide. The IBM System/360, introduced in 1964, was by far the most successful mainframe computer in the world. Many large U.S. firms depended on IBM to meet their needs for processing payroll, general accounting services, managing inventories, and billing.
Because these firms were often reliant on IBM for installing, maintaining, and servicing their computers, they were reluctant to shift to performing key tasks with personal computers like the Apple II. This reluctance was reinforced by the fact that few managers were familiar with Apple or other early personal computer firms like Commodore or Tandy, which sold the TRS-80 through Radio Shack stores. In addition, many firms lacked the technical staffs to install, maintain, and repair personal computers. Initially, it was easier for firms to rely on IBM to perform these tasks, just as they had long been performing the same tasks for firms’ mainframe computers.
By 1983, the IBM PC had overtaken the Apple II as the best-selling personal computer in the United States. In addition, IBM had decided to rely on other firms to supply its computer chips (Intel) and operating system (Microsoft) rather than develop its own proprietary computer chips and operating system. This so-called open architecture made it possible for other firms, such as Dell and Gateway, to produce personal computers that were similar to IBM’s. The result was to give an incentive for firms to produce software that would run on both the IBM PC and the “clones” produced by other firms, rather than produce software for Apple personal computers. Key software such as the spreadsheet program Lotus 1-2-3 and word processing programs, such as WordPerfect, cemented the dominance of the IBM PC and the IBM clones over Apple, which was largely shut out of the market for business computers.
As personal computers began to be widely used in business, there was a general expectation among economists and policymakers that business productivity would increase. Productivity, measured as output per hour of work, had grown at a fairly rapid average annual rate of 2.8 percent between 1948 and 1972. As we discuss in Macroeconomics, Chapter 10 (Economics, Chapter 20 and Essentials of Economics, Chapter 14), rising productivity is the key to an economy achieving a rising standard of living. Unless output per hour worked increases over time, consumption per person will stagnate. An annual growth rate of 2.8 percent will lead to noticeable increases in the standard of living.
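Differences between growth rates of this size compound dramatically. As a rough illustration (a sketch we've added, not a calculation from the original post), the standard compound-growth formula shows how long living standards take to double at different annual rates:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for output per hour of work to double at a constant
    annual growth rate, solving (1 + g)**t = 2 for t."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# At the 1948-1972 pace of 2.8 percent, living standards double in
# about 25 years; at 1.3 percent they take more than 50 years.
print(round(doubling_time(0.028), 1))  # 25.1
print(round(doubling_time(0.013), 1))  # 53.7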
Economists and policymakers were concerned when productivity growth slowed beginning in 1973. From 1973 to 198o, productivity grew at an annual rate of only 1.3 percent—less than half the growth rate from 1948 to 1972. Despite the widespread adoption of personal computers by businesses, during the 1980s, the growth rate of productivity increased only to 1.5 percent. In 1987, Nobel laureate Robert Solow of MIT famously remarked: “You can see the computer age everywhere but in the productivity statistics.” Economists labeled Solow’s observation the “productivity paradox.” With hindsight, it’s now clear that it takes time for businesses to adapt to a new technology, such as personal computers. In addition, the development of the internet, increases in the computing power of personal computers, and the introduction of innovative software were necessary before a significant increase in productivity growth rates occurred in the mid-1990s.
Result when ChatGPT 5 is asked to create an image illustrating ChatGPT
The release of ChatGPT in November 2022 is likely to be seen in the future as at least as important an event in the evolution of information technology as the introduction of the IBM PC in August 1981. Just as with personal computers, many people have been predicting that generative AI programs will have a substantial effect on the labor market and on productivity.
In this recent blog post, we discussed the conflicting evidence as to whether generative AI has been eliminating jobs in some occupations, such as software coding. Has AI had an effect on productivity growth? The following figure shows the rate of productivity growth in each quarter since the fourth quarter of 2022. The figure shows an acceleration in productivity growth beginning in the fourth quarter of 2023. From the fourth quarter of 2023 through the fourth quarter of 2024, productivity grew at an annual rate of 3.1 percent—higher than during the period from 1948 to 1972. Some commentators attributed this surge in productivity to the effects of AI.
However, the increase in productivity growth wasn’t sustained, with the growth rate in the first half of 2025 being only 1.3 percent. That slowdown makes it more likely that the surge in productivity growth was attributable to the recovery from the 2020 Covid recession or was simply an example of the wide fluctuations that can occur in productivity growth. The following figure, showing the entire period since 1948, illustrates how volatile quarterly rates of productivity growth are.
How large an effect will AI ultimately have on the labor market? If many current jobs are replaced by AI is it likely that the unemployment rate will soar? That’s a prediction that has often been made in the media. For instance, Dario Amodei, the CEO of generative AI firm Anthropic, predicted during an interview on CNN that AI will wipe out half of all entry level jobs in the U.S. and cause the unemployment rate to rise to between 10% and 20%.
Although Amodei is likely correct that AI will wipe out many existing jobs, it’s unlikely that the result will be a large increase in the unemployment rate. As we discuss in Macroeconomics, Chapter 9 (Economics, Chapter 19 and Essentials of Economics, Chapter 13) the U.S. economy creates and destroys millions of jobs every year. Consider, for instance, the following table from the most recent “Job Openings and Labor Turnover” (JOLTS) report from the Bureau of Labor Statistics (BLS). In June 2025, 5.2 million people were hired and 5.1 million left (were “separated” from) their jobs as a result of quitting, being laid off, or being fired.
Most economists believe that one of the strengths of the U.S. economy is the flexibility of the U.S. labor market. With a few exceptions, “employment at will” holds in every state, which means that a business can lay off or fire a worker without having to provide a cause. Unionization rates are also lower in the United States than in many other countries. U.S. workers have less job security than in many other countries, but—crucially—U.S. firms are more willing to hire workers because they can more easily lay them off or fire them if they need to. (We discuss the greater flexibility of U.S. labor markets in Macroeconomics, Chapter 11 (Economics, Chapter 21).)
The flexibility of the U.S. labor market means that it has shrugged off many waves of technological change. AI will have a substantial effect on the economy and on the mix of jobs available. But will the effect be greater than that of electrification in the late nineteenth century or the effect of the automobile in the early twentieth century or the effect of the internet and personal computing in the 1980s and 1990s? The introduction of automobiles wiped out jobs in the horse-drawn vehicle industry, just as the internet has wiped out jobs in brick-and-mortar retailing. People unemployed by technology find other jobs; sometimes the jobs are better than the ones they had and sometimes the jobs are worse. But economic historians have shown that technological change has never caused a spike in the U.S. unemployment rate. It seems likely—but not certain!—that the same will be true of the effects of the AI revolution.
Which jobs will AI destroy and which new jobs will it create? Except in a rough sense, the truth is that it is very difficult to tell. Attempts to forecast technological change have a dismal history. To take one of many examples, in 1998, Paul Krugman, later to win the Nobel Prize, cast doubt on the importance of the internet: “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” Krugman, Amodei and other prognosticators of the effects of technological change simply lack the knowledge to make an informed prediction because the required knowledge is spread across millions of people.
That knowledge only becomes available over time. The actions of consumers and firms interacting in markets mobilize information that is initially known only partially to any one person. In 1945, Friedrich Hayek made this argument in “The Use of Knowledge in Society,” which is one of the most influential economics articles ever written. One of Hayek’s examples is an unexpected decrease in the supply of tin. How will this development affect the economy? We find out only by observing how people adapt to a rising price of tin: “The marvel is that … without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation are made [by the increase in the price of tin] to use the material or its products more sparingly.” People adjust to changing conditions in ways that we lack sufficient information to reliably forecast. (We discuss Hayek’s view of how the market system mobilizes the knowledge of workers, consumers, and firms in Microeconomics, Chapter 2.)
It’s up to millions of engineers, workers, and managers across the economy, often through trial and error, to discover how AI can best reduce the cost of producing goods and services or improve their quality. Competition among firms drives them to make the best use of AI. In the end, AI may result in more people or fewer people being employed in any particular occupation. At this point, there is no way to know.
“Artificial intelligence is profoundly limiting some young Americans’ employment prospects, new research shows.” That’s the opening sentence of a recent opinion column in the Wall Street Journal. The columnist was reacting to a new academic paper by economists Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen of Stanford University. (See also this Substack post by Chandar that summarizes the results of their paper.) The authors find that:
“[S]ince the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment … In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor.”
The authors conclude that “our results are consistent with the hypothesis that generative AI has begun to significantly affect entry-level employment.”
About a month ago, we wrote a blog post looking at whether unemployment among young college graduates has been abnormally high in recent months. The following figure from that post shows that over time, the unemployment rates for the youngest college graduates (the red line) is nearly always above the unemployment rate for the population as a whole (the green line), while the unemployment rate for college graduates 25 to 34 years old (the blue line) is nearly always below the unemployment rate for the population as a whole. In July of this year, the unemployment rate for the population as a whole was 4.2 percent, while the unemployment for college graduates 20 to 24 years old was 8.5 percent, and the unemployment rate for college graduates 25 to 34 years old was 3.8 percent.
As the following figure (also reproduced from that blog post) shows, the increase in unemployment among young college graduates has been concentrated among males. Does higher male unemployment indicate that AI is eliminating jobs, such as software coding, that are disproportionately male? Data journalist John Burn-Murdoch argues against this conclusion, noting that data shows that “early-career coding employment is now tracking ahead of the [U.S.] economy.”
Another recent paper written by Sarah Eckhardt and Nathan Goldschlag of the Economic Innovation Group is also skeptical of the view that firms adopting generative AI programs is reducing employment in certain types of jobs. They use a measure developed by Edward Felton on Princeton University, and Manav Raj and Robert Seamans of New York University of how exposed particular jobs are to AI (AI Occupational Exposure (AIOE)). The following table from Eckhardt and Goldschlag’s paper shows the five most AI exposed jobs and the five least AI exposed jobs.
They divide all occupations into quintiles based on the exposure of the occupations to AI. Their key results are given in the following table, which shows that the occupations that are most exposed to the effects of AI—quintiles 4 and 5—have lower unemployment rates and higher wages than do the occupations that are least exposed to AI.
The Brynjolfsson, Chandar, and Chen paper mentioned at the beginning of this post uses a larger data set of workers by occupation from ADP, a private firm that processes payroll data for about 25 percent of U.S. workers. Figure 1 from their paper, reproduced here, shows that employment of workers in two occupations—software developers and customer service—representative of those occupations most exposted to AI declined sharply after generative AI programs became widely available in late 2022.
They don’t find this pattern for all occupations, as shown in the following figure from their paper.
Finally, they show results by occupational quintiles, with workers ages aged 22 to 25 being hard hit in the two occupational quintiles (4 and 5) most exposted to AI. The data show total employment growth from October 2022 to July 2025 by age group and exposure to AI.
Economics blogger Noah Smith has raised an interesting issue about Brynjolfsson, Chandar, and Chen’s results. Why would we expect that the negative effect of AI on employment to be so highly concentrated among younger workers? Why would employment in the most AI exposed occupations be growing rapidly among workers aged 35 and above? Smith wonders “why companies would be rushing to hire new 40-year-old workers in those AI-exposed occupations.” He continues:
“Think about it. Suppose you’re a manager at a software company, and you realize that the coming of AI coding tools means that you don’t need as many software engineers. Yes, you would probably decide to hire fewer 22-year-old engineers. But would you run out and hire a ton of new 40-year-old engineers?“
Both the papers discussed here are worth reading for their insights on how the labor market is evolving in the generative AI era. But taken together, they indicate that it is probably too early to arrive at firm conclusions about the effects of generative AI on the job market for young college graduates or other groups.
At the close of stock trading on Friday, January 24 at 4 pm EST, Nvidia’s stock had a price of $142.62 per share. When trading reopened at 9:30 am on Monday, January 27, Nvidia’s stock price plunged to $127.51. The total value of all Nvidia’s stock (the firm’s market capitalization or market cap) dropped by $589 billion—the largest one day drop in market cap in history. The following figure from the Wall Street Journal shows movements in Nvidia’s stock price over the past six months.
What happened to cause should a dramatic decline in Nvidia’s stock price? As we discuss in Macroeconomics, Chapter 6 (Economics, Chapter 8, and Money, Banking, and the Financial System, Chapter 6), Nividia’s price of $142.62 at the close of trading on January 24—like the price of any publicly traded stock—reflected all the information available to investors about the company. For the company’s stock to have declined so sharply at the beginning of the next trading day, important new information must have become available—which is exactly what happened.
As we discussed in this blog post from last October, Nvidia has been very successful in producing state-of-the-art computer chips that power the most advanced generative artificial intelligence (AI) software. Even after Monday’s plunge in the value of its stock, Nvidia still had a market cap of nearly $3.5 trillion at the end of the day. It wasn’t news that DeepSeek, a Chinese AI company had produced AI software called R1 that was similar to ChatGTP and other AI software produced by U.S. companies. The news was that R1—the latest version of the software is called V3—appeared to be comparable in many ways to the AI software produced by U.S. firms, but had been produced by DeepSeek despite not using the state-of-the-art Nvidia chips used in those AI programs.
The Biden administration had barred export to China of the newest Navidia chips to keep Chinese firms from surging ahead of U.S. firms in developing AI. DeepSeek claimed to have developed its software using less advanced chips and have trained its software at a much lower cost than U.S. firms have been incurring to train their software. (“Training” refers to the process by which engineers teach software to be able to accurately solve problems and answer questions.) Because DeepSeek’s costs are lower, the company charges less than U.S. AI firms do to use its computer infrastructure to handle business tasks like responding to consumer inquiries.
If the claims regarding DeepSeek’s software are accurate, then AI firms may no longer require the latest Nvidia chips and may be forced to reduce the prices they can charge firms for licensing their software. The demand for electricity generation may also decline if it turns out that the demand for AI data centers, which use very large amounts of power, will be lower than expected.
But on Monday it wasn’t yet clear whether the claims being made about DeepSeek’s software were accurate. Some industry observers speculated that, despite the U.S. prohibition on exporting the latest Nvidia chips to China, DeepSeek had managed to obtain them but was reluctant to admit that it had. There were also questions about whether DeepSeek had actually spent as little as it claimed in training its software.
What happens to the price of Nvidia’s stock during the rest of the week will indicate how investors are evaluating the claims DeepSeek made about its AI software.
Image generated by GTP-4o illustrating labor productivity
Several articles in the business press have discussed the recent increases in labor productivity. For instance, this article appeared in this morning’s Wall Street Journal (a subscription may be required).
The most widely used measure of labor productivity is output per hour of work in the nonfarm business sector. The BLS calculates output in the nonfarm business sector by subtracting from GDP production in the agricultural, government, and nonprofit sectors. (The definitions used by the Bureau of Labor Statistics (BLS) in estimating labor productivity are discussed in the “Technical Notes” that appear at the end of the BLS’s quarterly “Productivity and Costs” releases.) The blue line in the following figure shows the annual growth rate in labor productivity in the nonfarm business sector as measured by the percentage change from the same quarter in the previous year. The green line shows labor productivity growth in manufacturing.
As the figure shows, both labor productivity growth in the nonfarm business sector and labor productivity growth in manufacturing are volatile. The business press has focused on the growth of productivity in the nonfarm business sector during the period from the third quarter of 2023 through the third quarter of 2024. During this time, labor productivity has grown at an average annual rate of 2.5 percent. That growth rate is notably higher than the growth rate that many economists are expecting over the next 10 years. For instance, the Congressional Budget Office (CBO) has forecast that labor productivity will grow at an average annual rate of only 1.6 percent over the period from 2025 to 2034.
The CBO forecasts that the total numbers of hours worked in the economy will grow at an average annual rate of 0.5 percent. Combining that estimate with a 2.5 percent annual rate of growth of labor productivity results in output per person—a measure of the standard of living—increasing by 34 percent by 2034. If labor productivity increases at a rate of only 1.6 percent, then output per person will have increased by only 23 percent by 2034.
The standard of living of the average person in United States increasing 11 percent more would make a noticeable difference in people’s lives by allowing them to consume and save more. Higher rates of labor productivity growth leading to a faster growth rate of income and output would also increase the federal government’s tax revenues, helping to decrease federal budget deficits that are currently forecast to be historically large. (We discuss the components of long-run economic growth in Macroeconomics, Chapter 16, Section 16.7; Economics, Chapter 26, Section 26.7, and the economics of long-run growth in Macroeconomics, Chapter 11; Economics, Chapter 21.)
Can the recent growth rates in labor productivity be maintained over the next 10 years? There is an historical precedent. Labor productivity in the nonfarm business sector grew at an average annual rate of 2.6 percent between 1950 and 1973. But growth rates that high have proven difficult to achieve in more recent years. For instance, from 2008 to 2023, labor productivity grew at an average annual rate of only 1.5 percent. (We discuss the debate over future growth rates in Macroeconomics, Chapter 11, Section 11.3; Economics, Chapter 21, Section 21.3.)
The Wall Street Journal article we cited earlier provides an overview of some of the factors that may account for the recent increase in labor productivity growth rates. The 2020 Covid pandemic may have led to some increases in labor productivity. Workers who temporarily or permanently lost their jobs as businesses closed during the height of the pandemic may have found new jobs that better matched their skills, making them more productive. Similarly, businesses that were forced to operate with fewer workers, may have found ways to restore their previous levels of output with lower levels of employment. These changes may have led to one-time increases in labor productivity at some firms, but are unlikely to result in increased rates of labor productivity growth in the future.
Some businesses have used newly available generative artificial intelligence (AI) software to increase labor productivity by, for instance, using software to replace workers who previously produced marketing materials or responded to customer questions or complaints. It will take at least several years before generative AI software spreads throughout the economy, so it seems too early for it to have had a broad enough effect on the economy to be visible in the productivity data.
Note also that, as the green line in the figure above shows, manufacturing productivity has been lagging recently. From the third quarter of 2023 to the third quarter of 2024, labor productivity in manufacturing has increased at an annual average rate of only 0.4 percent. This slowdown is surprising given that over the long run productivity in manufacturing has typically increased faster than has productivity in the overall economy. It seems unlikely that labor productivity in the overall economy can sustain its recent growth rates if labor productivity growth in manufacturing continues to lag.
Finally, the productivity data are subject to revision as better estimates of output and of hours worked become available. It’s possible that what appear to be rapid rates of productivity growth during the last five quarters may turn out to have been less rapid following data revisions.
So, while the recent increase in the growth rate of labor productivity is an encouraging sign of the strength of the U.S. economy, it’s too soon to tell whether we have entered a sustained period of higher productivity growth.