AI Is Making Economists Rethink the Story of Automation

by Walter Frick
May 27, 2024

Is artificial intelligence about to put vast numbers of people out of a job? Most economists would argue the answer is no: If technology permanently puts people out of work then why, after centuries of new technologies, are there still so many jobs left? New technologies, they claim, make the economy more productive and allow people to enter new fields — like the shift from agriculture to manufacturing. For that reason, economists have historically shared a general view that whatever upheaval might be caused by technological change, it is “somewhere between benign and benevolent.”

But as new AI models and tools are released almost weekly, that consensus is cracking. Evidence has mounted that digital technologies have helped to increase inequality in the U.S. and around the world. As computers have made knowledge workers more productive, for instance, they have also lowered demand for “middle wage” jobs like clerical worker or administrative assistant. In response, some economists have started to revise their models of how technology — and particularly automation — affects labor markets. “The possibility that technological improvements that increase productivity can actually reduce the wage of all workers is an important point to emphasize because it is often downplayed or ignored,” write MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo in a recent paper.

This new economics of automation retains the core idea that, in the long run, technology often makes workers more productive and so allows their wages to rise. But it also raises two important points: First, there is a big difference between using technology to automate existing work and creating entirely new capabilities that couldn’t exist before. Second, the path of technology depends in part on who’s deciding how it’s used. “AI offers vast tools for augmenting workers and enhancing work. We must master those tools and make them work for us,” writes MIT economist David Autor.

Economists understand the world by building models. Those models attempt to capture the messy, sprawling reality of modern economies but they are intentionally simplified. The aim is to illustrate key choices and tradeoffs that shape the economy. In the process, these models often help to shape what policymakers pay attention to. As economists update their models of automation, they are both changing the field’s understanding of what technology does to workers and shifting the debate about how politicians and regulators should respond.

The race between education and technology.
Economists’ positive view of technology and what it does to labor markets comes from a fairly straightforward place. The story of the 20th century is one of technology seeming to lift most boats. In 1900, 41% of U.S. workers worked in agriculture; by 2000, only 2% did. That transition was made possible by new machines — like plows and harvesters — that were first horse-powered, then mechanized. At the same time, machinery created a boom in manufacturing. New cities and towns sprang up around new manufacturing businesses, and the U.S. economy became more urban, more industrial, and much, much richer. Wages rose and hours worked fell. The share of workers employed in the most physically grueling occupations dropped dramatically, according to economic historian Robert Gordon. These shifts had many causes and were not unambiguously good. Nonetheless, as Gordon concludes, they improved Americans’ well-being considerably, and they could not have occurred without new technology.

Technology has the potential to make us more productive and grow the economic pie — and this dynamic remains central to economists’ understanding of prosperity and growth. Without the mechanization of agriculture, the stark increase in living standards seen in many parts of the world over the last two centuries would not have been possible. This history is reflected in the models it spawned. But those models included a crucial assumption: that no one was left worse off.

Labor economists later complicated that story by distinguishing between “high-skilled” and “low-skilled” workers — usually approximated using data on education levels. This allowed them to model how technologies could increase inequality. Computers made many knowledge workers much more productive — thanks to innovations like spreadsheets and email — and therefore raised their wages. But they did less for less-educated workers, which gave rise to what Harvard economists Claudia Goldin and Lawrence Katz called a “race between education and technology.”

The thinking behind the “race” was that technology required more education to unlock its productivity benefits, so it created more demand for highly educated workers. That created the potential for inequality, as the wages of in-demand, educated workers rose faster than wages for less educated workers. In mid-twentieth-century America, that effect was offset by the fact that more and more people were going to college. Those new graduates fulfilled the demand for more educated workers, and workers without a degree were scarce enough that their wages could rise, too. But when the share of Americans going to college started to plateau in the 1980s — even as technology kept improving — new demand for educated workers went unmet. So wages for those with a college degree rose much faster than for those without one, increasing inequality.
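
One common way to write this race down, offered here as a textbook-style sketch rather than Goldin and Katz's exact specification, treats college-educated and non-college labor as two inputs that substitute for one another with elasticity sigma. Up to a constant, the college wage premium rises with skill-biased technology (A_H relative to A_L) and falls with the relative supply of graduates (H relative to L):

\ln\left(\frac{w_H}{w_L}\right) = \text{constant} + \frac{\sigma - 1}{\sigma}\,\ln\left(\frac{A_H}{A_L}\right) - \frac{1}{\sigma}\,\ln\left(\frac{H}{L}\right)

When the supply term grows as fast as the technology term, the premium holds steady; when college-going plateaus while technology keeps improving, the premium rises, and with it inequality.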

These models illustrated what was called “skill-biased technological change” and captured key aspects of how technology shapes work. It generally makes us more productive, but can affect some occupations and skill sets more than others. Despite their simplicity, these models do a decent job of summarizing a century’s worth of wage data — as MIT economist David Autor told me back in 2015 when I asked him about his work in this area.

The problem, Autor said in a recent interview with me, is that older models assumed that technology “might raise some boats more than others, but wouldn’t lower any boats.” However, as digital technology transformed the global economy, there was “lots of evidence people were made worse off.”

When technology creates new kinds of work — and when it doesn’t.
Why do some new inventions seem to lift wages broadly — at least eventually — while others make swaths of workers worse off? Over the last decade, economists have answered that question by distinguishing between technologies that create new kinds of work, and those that merely automate work that already existed.

The journey toward these newer models began in the mid-2000s, as economists took advantage of richer data and started to break work down into individual tasks. For example, a researcher’s job might include collecting data, performing data analysis, and writing reports. At first, all three tasks are done by a person. But over time technology might take over the data collection task, leaving the researcher to do the analysis and write the report.

Task-based models allowed for a more finely grained view of technology’s impact on work, and helped further explain rising inequality in the U.S. and much of the world. Starting in the 1980s, digital technology began taking over tasks associated with middle-wage jobs, like bookkeeping or clerical work. It made many highly skilled tasks — like data analysis and report writing — more productive and more lucrative. But as middle-class workers were displaced, many of them moved to lower-wage jobs — and the abundance of available workers often meant that wages fell in some of these already poorly paid occupations. From 1980 through the early twenty-first century, job growth bifurcated into highly paid knowledge work and poorly paid services.

The task-based view also clarified the importance of expertise — it matters which tasks the computers take over. It’s better, from a worker’s perspective, to have machines take over rote, low-value work — as long as you’re able to keep using your expertise to perform the higher-value tasks that remain.

One limit of the task-based view, at least at first, was that it assumed the list of potential tasks was static. But as researchers catalogued the way job titles and requirements evolved, they discovered just how many people work in jobs that until recently didn’t exist.

“More than 60% of employment in 2018 was found in job titles that did not exist in 1940,” according to research by Autor. In 1980, the Census Bureau added controllers of remotely piloted vehicles to its list of occupations; in 2000 it added sommeliers. Those examples highlight the two related ways that technology can create work. In the first case, a new technology directly created a new kind of job that required new skills. In the second, a richer society — full of computers and remotely piloted vehicles — meant that consumers could spend money on new extravagances, like the services of a sommelier.

This “new work” is the key to how technology affects the labor market, according to some economists. In their view, whether technology works out well for workers depends on whether society invents new things for them to excel at — like piloting remote vehicles. If the economy is rapidly adding new occupations that utilize human skill, then it can absorb some number of displaced workers.

Acemoglu and Restrepo formalized this idea in 2018 in a model in which automation is in a race against the creation of new tasks. New technologies displace workers and create new things for them to do; when displacement gets ahead of new work, wages can fall.
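
To make that race concrete, here is a deliberately oversimplified toy simulation in Python. It is not Acemoglu and Restrepo's formal model; the simulate function and every parameter value below are invented for illustration. A stylized wage is set equal to overall productivity scaled by the share of tasks that workers still perform, so the wage rises when new work keeps pace with automation and falls when displacement runs ahead.

# Toy sketch of the "race" between automation and the creation of new tasks.
# Each period, automation removes some tasks from workers and new-task creation
# adds some back; a stylized wage equals overall productivity times the share
# of tasks still performed by labor. All numbers are illustrative, not estimates.

def simulate(periods, tasks_automated_per_period, new_tasks_per_period,
             initial_labor_tasks=100, productivity_growth=0.01):
    """Return a list of stylized wages, one per period."""
    labor_tasks = initial_labor_tasks
    productivity = 1.0
    wages = []
    for _ in range(periods):
        # Reinstatement (new tasks) minus displacement (automated tasks).
        labor_tasks = max(labor_tasks + new_tasks_per_period - tasks_automated_per_period, 0)
        productivity *= 1 + productivity_growth
        wages.append(productivity * labor_tasks / initial_labor_tasks)
    return wages


if __name__ == "__main__":
    new_work_keeps_pace = simulate(30, tasks_automated_per_period=2, new_tasks_per_period=2)
    displacement_ahead = simulate(30, tasks_automated_per_period=3, new_tasks_per_period=1)
    print(f"Stylized wage after 30 periods, new work keeps pace:     {new_work_keeps_pace[-1]:.2f}")
    print(f"Stylized wage after 30 periods, displacement runs ahead: {displacement_ahead[-1]:.2f}")

In the first run the stylized wage rises with productivity, from 1.00 to about 1.35; in the second it falls to roughly 0.54 even though productivity keeps growing, which is exactly the possibility the newer models take seriously.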

As economists have reworked their theories, they’ve revised their recommendations, too. In the era of the education-technology race, they often recommended that more people go to college or otherwise upgrade their skills. Today, they’re more likely to emphasize the importance of creating new work, and of the policies and institutions that support it.

Technologies “transform our lives” when we use them “to totally transform the set of things we can do,” says Autor. The internet wasn’t just a better way to make phone calls, and electricity wasn’t just an alternative to gas lighting. The most important technologies create whole new categories of human activity. That means both new jobs and new demand, as society becomes wealthier.

This is akin to an old idea in management: “reengineering.” In 1990, Michael Hammer wrote a famous HBR article urging managers to “stop paving the cow paths.” Old processes should not merely be automated, he argued; they should be reimagined from scratch. The implication of the “new task” models from Acemoglu and others is similar. Rather than merely automating the tasks that we currently perform, we should invent entirely new ways for AI to make our lives better — and new ways for humans to develop and use expertise.

Who gets to decide?
The tasks that AI takes on will depend in part on who is making the decisions — and how much input workers have. Last year, Hollywood writers negotiated a new contract focused in part on how AI could be used in the scriptwriting process. Molly Kinder, a fellow at the Brookings Institution, recently published a case study on those negotiations, concluding that:

“The contract the Guild secured in September set a historic precedent: It is up to the writers whether and how they use generative AI as a tool to assist and complement — not replace — them. Ultimately, if generative AI is used, the contract stipulates that writers get full credit and compensation.”

Unions have an uneasy relationship with technology and are often skeptical of automation. Here again, economists’ thinking has evolved. In the 1980s, the most prominent view was that unionized firms had less incentive to invest in innovation and new technologies. Because unions would ensure that workers received most of the benefits, the thinking went, investors had little incentive to spend on R&D. But there are several other ways of thinking about this, says John Van Reenen, an economist at the London School of Economics.

Firms that make good use of new technologies usually pay more because they’re more productive and profitable. Van Reenen says that, under the right circumstances, unions can help ensure that workers have the power to claim some share of those profits in the form of higher wages. In one paper, he quotes John Hodge, former head of the U.S. Smelters, who once said, “We won’t work against the machine if we get a fair share of the plunder.”

Input from workers — which unions often facilitate — can also steer companies toward more productive (and worker-friendly) uses of AI. “There is an emerging view that bottom-up innovation is going to be the best way to figure out the best uses of AI,” says Kinder. “So there is a business case for keeping employees in the loop.”

And worker input can guard against a phenomenon that MIT’s Acemoglu has warned about in his research and in a recent book: “so-so technology.” The idea is that companies sometimes automate just enough to replace workers, but without creating big improvements in productivity. Acemoglu uses the example of self-checkout kiosks: They work well enough to take work away from cashiers, but not so well that they provide a major boost to the economy that could fuel demand elsewhere.
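
A back-of-the-envelope illustration, with invented numbers, shows why so-so automation worries him: suppose the checkout task costs $20 an hour when a cashier does it, and $19 an hour-equivalent once the kiosk's amortized cost and the attendant's time are counted.

Productivity effect: $20 - $19 = $1 an hour saved, a gain of roughly 5%
Displacement effect: $20 an hour of cashier work shifts from labor to the machine

The cost saving is too small to meaningfully lower prices or boost demand elsewhere, but the displaced wages are the worker's entire stake in the task.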

• • •

Whole books have been written about the influence that economics has over policymakers. That influence is probably overrated: Policies are determined more by mundane politics than by economics textbooks, for better or worse. Nonetheless, the twists and turns of economics research matter — both because they help us understand how the economy works and because the models themselves do shape public debates over how governments should act.

For decades, economists told a story in which technology raised all boats and — by assumption — no one was left worse off. They were and are right that technology is one of the most reliable ways for a society to raise its living standards. But their recognition of how it can displace and harm workers is overdue.

Economists’ more recent models of automation also provide crucial lessons for the coming tech wave. If AI is going to usher in an era of widely-shared prosperity, two things will need to be true. First, it needs to create new kinds of work that humans can excel at — new tasks that didn’t exist before. Second, decision-making at all levels, from firms to governments, needs to include workers’ voices. That doesn’t necessarily mean giving workers a veto over every potential AI use case or insisting that no jobs be lost. But it does mean ensuring workers have the power to make their perspective heard.

Economists as a group remain less pessimistic about AI than many; few predict a jobless future. They recognize that, like many of the great “general purpose” technologies of past eras, AI has the potential to dramatically improve our lives. The key, as Autor says, is to make it work for us.
