Is the iPhone Air Apple’s “New Coke”?

Image created by GPT

Most large firms selling consumer goods continually evaluate which new products they should introduce. Managers of these firms are aware that if they fail to fill a market niche, their competitors or a new firm may develop a product to fill the niche. Similarly, firms search for ways to improve their existing products.

For example, Ferrara Candy introduced Nerds in 1983. Although Nerds experienced steady sales over the following years, company managers decided to devote resources to improving the brand. In 2020, they introduced Nerds Gummy Clusters, which an article in the Wall Street Journal describes as being “crunchy outside and gummy inside.” Over five years, sales of Nerds increased from $50 million to $500 million. Although the company’s market research “suggested that Nerds Gummy Clusters would be a dud … executives at Ferrara Candy went with their guts—and the product became a smash.”
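Those figures imply remarkably fast growth. A quick Python calculation (the sales numbers come from the paragraph above; assuming smooth compounding is a simplification) gives the implied compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) of Nerds sales,
# from $50 million to $500 million over five years.
start, end, years = 50, 500, 5  # sales in $ millions
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 58.5% per year
```

In other words, sales grew by well over half each year, on average, for five straight years.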

Image of Nerds Gummy Clusters from nerdscandy.com

Firms differ on the extent to which they rely on market research—such as focus groups or polls of consumers—when introducing a new product or overhauling an existing product. Henry Ford became the richest man in the United States by introducing the Model T, the first low-priced and reliable mass-produced automobile. But Ford once remarked that if, before introducing the Model T, he had asked people the best way to improve transportation, they would probably have told him to develop a faster horse. (Note that there’s a debate as to whether Ford ever actually made this observation.) Apple co-founder Steve Jobs took a similar view, once remarking in an interview that “it’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” In another interview, Jobs stated: “We do no market research. We don’t hire consultants.”

Unsurprisingly, not all new products large firms introduce are successful—whether the products were developed as a result of market research or relied on the hunches of a company’s managers. To take two famous examples, consider the products shown in the image at the beginning of this post—“New Coke” and the Ford Edsel.

Pepsi and Coke have been in an intense rivalry for decades. In the 1980s, Pepsi began to gain market share at Coke’s expense as a result of television commercials showcasing the “Pepsi Challenge.” The Pepsi Challenge had consumers choose between colas in two unlabeled cups. Consumers overwhelmingly chose the cup containing Pepsi. Coke’s management came to believe that Pepsi was winning the blind taste tests because Pepsi was sweeter than Coke and consumers tend to favor sweeter colas. In 1985, Coke’s managers decided to replace the existing Coke formula—which had been largely unchanged for almost 100 years—with New Coke, which had a sweeter taste. Unfortunately for Coke’s managers, consumers’ reaction to New Coke was strongly negative. Less than three months later, the company reintroduced the original Coke, now labeled “Coke Classic.” Although Coke produced both versions of the cola for a number of years, it eventually stopped selling New Coke.

Through the 1920s, the Ford Motor Company produced only two car models—the low-priced Model T and the high-priced Lincoln. That strategy left an opening for General Motors during the 1920s to introduce a variety of car models at a number of price levels. Ford scrambled during the 1930s and after the end of World War II in 1945 to add new models that would compete directly with some of GM’s models. After a major investment in new capacity and an elaborate marketing campaign, Ford introduced the Edsel in September 1957 to compete against GM’s mid-priced models: Pontiac, Oldsmobile, and Buick.

Unfortunately, the Edsel was introduced during a sharp, although relatively short, economic recession. As we discuss in Macroeconomics, Chapter 13 (Economics, Chapter 23), consumers typically cut back on purchases of consumer durables like automobiles during a recession. In addition, the Edsel suffered from reliability problems, and many consumers disliked the unusual design, particularly of the front of the car. Consumers were also puzzled by the name Edsel. Ford CEO Henry Ford II was the grandson of Henry Ford and the son of Edsel Ford, who had died in 1943. Henry Ford II named the car in honor of his father, but the unusual name didn’t appeal to consumers. Ford ceased production of the car in November 1959 after losing $250 million, which was one of the largest losses in business history to that point. The name “Edsel” has lived on as a synonym for a disastrous product launch.


Image of iPhone Air from apple.com

Apple earns about half of its revenue and more than half of its profit from iPhone sales. Making sure that it is able to match or exceed the smartphone features offered by competitors is a top priority for CEO Tim Cook and other Apple managers. Because Apple’s iPhones are higher-priced than many other smartphones, Apple has tried various approaches to competing in the market for lower-priced smartphones.

In 2013, Apple was successful in introducing the iPad Air, a thinner, lower-priced version of its popular iPad. Apple introduced the iPhone Air in September 2025, hoping to duplicate the success of the iPad Air. The iPhone Air has a titanium frame and is lighter than the regular iPhone model. The Air is also thinner, which means that its camera, speaker, and battery are all a step down from those of the regular iPhone 17 model. In addition, the iPhone Air’s price is $100 lower than that of the iPhone 17 Pro but $200 higher than that of the base model iPhone 17.

Unlike with the iPad Air, Apple doesn’t seem to have aimed the iPhone Air at consumers looking for a lower-priced alternative. Instead, Apple appears to have targeted consumers who value a thinner, lighter phone that appears more stylish because of its titanium frame, and who are willing to sacrifice some camera and sound quality, as well as battery life. An article in the Wall Street Journal declared: “The Air is the company’s most innovative smartphone design since the iPhone X in 2017.” As it has turned out, there are apparently fewer consumers who value this mix of features in a smartphone than Apple had expected.

Sales were sufficiently disappointing that within a month of its introduction, Apple ordered suppliers to cut back production of iPhone Air components by more than 80 percent. Apple was expected to produce 1 million fewer iPhone Airs during 2025 than the company had initially planned. An article in the Wall Street Journal labeled the iPhone Air “a marketing win and a sales flop.” According to a survey by the KeyBanc investment firm there was “virtually no demand for [the] iPhone Air.”

Was Apple having its New Coke moment? There seems little doubt that the iPhone Air has been a very disappointing new product launch. But its very slow sales haven’t inflicted nearly the damage that New Coke caused Coca-Cola or that the Edsel caused Ford. A particularly damaging aspect of New Coke was that it was meant as a replacement for the existing Coke, which was being pulled from production. The result was a larger decline in sales than if New Coke had been offered for sale alongside the existing Coke. Similarly, Ford set up a whole new division of the company to produce and sell the Edsel. When Edsel production had to be stopped after only two years, the losses were much greater than they would have been if Edsel production hadn’t been planned to be such a large fraction of Ford’s total production of automobiles.

Although very slow iPhone Air sales have caused Apple to incur losses on the model, the Air was meant to be one of several iPhone models and not the only iPhone model. Clearly investors don’t believe that problems with the Air will matter much to Apple’s profits in the long run. The following graphic from the Wall Street Journal shows that Apple’s stock price has kept rising even after news of serious problems with Air sales became public in late October.


So, while the iPhone Air will likely go down as a failed product launch, it won’t achieve the legendary status of New Coke or the Edsel.

Is it 1987 for AI?

Image generated by ChatGPT 5 of a 1981 IBM personal computer.

The modern era of information technology began in the 1980s with the spread of personal computers. A key development was the introduction of the IBM personal computer in 1981. The Apple II, designed by Steve Jobs and Steve Wozniak and introduced in 1977, was the first widely used personal computer, but the IBM personal computer had several advantages over the Apple II. For decades, IBM had been the dominant firm in information technology worldwide. The IBM System/360, introduced in 1964, was by far the most successful mainframe computer in the world. Many large U.S. firms depended on IBM to meet their needs for processing payroll, general accounting services, managing inventories, and billing.

Because these firms were often reliant on IBM for installing, maintaining, and servicing their computers, they were reluctant to shift to performing key tasks with personal computers like the Apple II. This reluctance was reinforced by the fact that few managers were familiar with Apple or other early personal computer firms like Commodore or Tandy, which sold the TRS-80 through Radio Shack stores. In addition, many firms lacked the technical staffs to install, maintain, and repair personal computers. Initially, it was easier for firms to rely on IBM to perform these tasks, just as they had long been performing the same tasks for firms’ mainframe computers.

By 1983, the IBM PC had overtaken the Apple II as the best-selling personal computer in the United States. In addition, IBM had decided to rely on other firms to supply its computer chips (Intel) and operating system (Microsoft) rather than develop its own proprietary computer chips and operating system. This so-called open architecture made it possible for other firms, such as Dell and Gateway, to produce personal computers that were similar to IBM’s. The result was to give an incentive for firms to produce software that would run on both the IBM PC and the “clones” produced by other firms, rather than produce software for Apple personal computers. Key software such as the spreadsheet program Lotus 1-2-3 and word processing programs, such as WordPerfect, cemented the dominance of the IBM PC and the IBM clones over Apple, which was largely shut out of the market for business computers.

As personal computers began to be widely used in business, there was a general expectation among economists and policymakers that business productivity would increase. Productivity, measured as output per hour of work, had grown at a fairly rapid average annual rate of 2.8 percent between 1948 and 1972. As we discuss in Macroeconomics, Chapter 10 (Economics, Chapter 20 and Essentials of Economics, Chapter 14) rising productivity is the key to an economy achieving a rising standard of living. Unless output per hour worked increases over time, consumption per person will stagnate. An annual growth rate of 2.8 percent will lead to noticeable increases in the standard of living.

Economists and policymakers were concerned when productivity growth slowed beginning in 1973. From 1973 to 1980, productivity grew at an annual rate of only 1.3 percent—less than half the growth rate from 1948 to 1972. Despite the widespread adoption of personal computers by businesses during the 1980s, the growth rate of productivity rose only to 1.5 percent. In 1987, Nobel laureate Robert Solow of MIT famously remarked: “You can see the computer age everywhere but in the productivity statistics.” Economists labeled Solow’s observation the “productivity paradox.” With hindsight, it’s now clear that it takes time for businesses to adapt to a new technology, such as personal computers. In addition, the development of the internet, increases in the computing power of personal computers, and the introduction of innovative software were necessary before a significant increase in productivity growth rates occurred in the mid-1990s.
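The gap between a 1.3 percent and a 2.8 percent growth rate matters more than it may look. A back-of-the-envelope Python sketch, assuming each rate holds constant, shows how long output per hour takes to double:

```python
import math

def doubling_time(growth_rate):
    """Years for output per hour to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(f"At 2.8% growth: {doubling_time(0.028):.0f} years")  # about 25 years
print(f"At 1.3% growth: {doubling_time(0.013):.0f} years")  # about 54 years
```

At the 1948–1972 rate, living standards double roughly once a generation; at the post-1973 rate, the doubling takes more than twice as long.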

Result when ChatGPT 5 is asked to create an image illustrating ChatGPT

The release of ChatGPT in November 2022 is likely to be seen in the future as at least as important an event in the evolution of information technology as the introduction of the IBM PC in August 1981. Just as with personal computers, many people have been predicting that generative AI programs will have a substantial effect on the labor market and on productivity.

In this recent blog post, we discussed the conflicting evidence as to whether generative AI has been eliminating jobs in some occupations, such as software coding. Has AI had an effect on productivity growth? The following figure shows the rate of productivity growth in each quarter since the fourth quarter of 2022. The figure shows an acceleration in productivity growth beginning in the fourth quarter of 2023. From the fourth quarter of 2023 through the fourth quarter of 2024, productivity grew at an annual rate of 3.1 percent—higher than during the period from 1948 to 1972. Some commentators attributed this surge in productivity to the effects of AI.

However, the increase in productivity growth wasn’t sustained, with the growth rate in the first half of 2025 being only 1.3 percent. That slowdown makes it more likely that the surge in productivity growth was attributable to the recovery from the 2020 Covid recession or was simply an example of the wide fluctuations that can occur in productivity growth. The following figure, showing the entire period since 1948, illustrates how volatile quarterly rates of productivity growth are.
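Part of that volatility is mechanical: quarterly productivity changes are reported at annualized rates, so a small quarter-over-quarter movement gets compounded over four quarters. A short Python sketch of the convention (assumed here to match how the BLS annualizes the series):

```python
def annualized(quarterly_growth):
    """Compound a quarter-over-quarter growth rate over four quarters."""
    return (1 + quarterly_growth) ** 4 - 1

# A difference of less than half a percentage point quarter over quarter
# separates a 3.1% annualized rate from a 1.3% one:
print(f"{annualized(0.0077):.1%}")  # about 3.1%
print(f"{annualized(0.0032):.1%}")  # about 1.3%
```

Because annualizing roughly quadruples any quarter-to-quarter wiggle, even modest measurement noise shows up as large swings in the reported series.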

How large an effect will AI ultimately have on the labor market? If many current jobs are replaced by AI, is it likely that the unemployment rate will soar? That’s a prediction that has often been made in the media. For instance, Dario Amodei, the CEO of generative AI firm Anthropic, predicted during an interview on CNN that AI will wipe out half of all entry-level jobs in the U.S. and cause the unemployment rate to rise to between 10 percent and 20 percent.

Although Amodei is likely correct that AI will wipe out many existing jobs, it’s unlikely that the result will be a large increase in the unemployment rate. As we discuss in Macroeconomics, Chapter 9 (Economics, Chapter 19 and Essentials of Economics, Chapter 13) the U.S. economy creates and destroys millions of jobs every year. Consider, for instance, the following table from the most recent “Job Openings and Labor Turnover” (JOLTS) report from the Bureau of Labor Statistics (BLS). In June 2025, 5.2 million people were hired and 5.1 million left (were “separated” from) their jobs as a result of quitting, being laid off, or being fired.
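The contrast between those gross flows and the net change in employment can be made concrete with the JOLTS figures cited above (in millions):

```python
# Gross labor-market churn vs. the net change in employment,
# using the June 2025 JOLTS figures cited above (in millions).
hires, separations = 5.2, 5.1
print(f"Gross churn: {hires + separations:.1f} million job moves")
print(f"Net employment change: {hires - separations:+.1f} million")
```

Roughly 10 million workers changed jobs in a single month, yet net employment barely moved, an illustration of how much reallocation the U.S. labor market routinely absorbs.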

Most economists believe that one of the strengths of the U.S. economy is the flexibility of the U.S. labor market. With a few exceptions, “employment at will” holds in every state, which means that a business can lay off or fire a worker without having to provide a cause. Unionization rates are also lower in the United States than in many other countries. U.S. workers have less job security than in many other countries, but—crucially—U.S. firms are more willing to hire workers because they can more easily lay them off or fire them if they need to. (We discuss the greater flexibility of U.S. labor markets in Macroeconomics, Chapter 11 (Economics, Chapter 21).)

The flexibility of the U.S. labor market means that it has shrugged off many waves of technological change. AI will have a substantial effect on the economy and on the mix of jobs available. But will the effect be greater than that of electrification in the late nineteenth century, of the automobile in the early twentieth century, or of the internet and personal computing in the 1980s and 1990s? The introduction of automobiles wiped out jobs in the horse-drawn vehicle industry, just as the internet has wiped out jobs in brick-and-mortar retailing. People displaced by technology find other jobs; sometimes the jobs are better than the ones they had, and sometimes they are worse. But economic historians have shown that technological change has never caused a spike in the U.S. unemployment rate. It seems likely—but not certain!—that the same will be true of the effects of the AI revolution.

Which jobs will AI destroy and which new jobs will it create? Except in a rough sense, the truth is that it is very difficult to tell. Attempts to forecast technological change have a dismal history. To take one of many examples, in 1998, Paul Krugman, later to win the Nobel Prize, cast doubt on the importance of the internet: “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.” Krugman, Amodei and other prognosticators of the effects of technological change simply lack the knowledge to make an informed prediction because the required knowledge is spread across millions of people. 

That knowledge only becomes available over time. The actions of consumers and firms interacting in markets mobilize information that is initially known only partially to any one person. In 1945, Friedrich Hayek made this argument in “The Use of Knowledge in Society,” which is one of the most influential economics articles ever written. One of Hayek’s examples is an unexpected decrease in the supply of tin. How will this development affect the economy? We find out only by observing how people adapt to a rising price of tin: “The marvel is that … without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation are made [by the increase in the price of tin] to use the material or its products more sparingly.” People adjust to changing conditions in ways that we lack sufficient information to reliably forecast. (We discuss Hayek’s view of how the market system mobilizes the knowledge of workers, consumers, and firms in Microeconomics, Chapter 2.)

It’s up to millions of engineers, workers, and managers across the economy, often through trial and error, to discover how AI can best reduce the cost of producing goods and services or improve their quality. Competition among firms drives them to make the best use of AI. In the end, AI may result in more people or fewer people being employed in any particular occupation.  At this point, there is no way to know.