In the spring of 2000, nearly 82 percent of Americans between the ages of 25 and 54 had a job. After the dot-com bust, that ratio fell for years, bottoming out at around 78.6 percent in the fall of 2003 before rising gently to 80.2 percent in the spring of 2007. It then fell during the financial crisis, first gradually and then sharply, all the way down to just below 75 percent in the winter of 2009-2010. The recovery that followed was incredibly slow: four years later, the ratio was still at just 77.1 percent, below the pre-recession peak and far below the turn-of-the-millennium peak.
At the time, people talked a lot about why the recovery was so slow, and though it’s been largely memory-holed since, one prominent theory was that labor demand was drying up due to automation.
In a 2011 interview with NBC News, Barack Obama explained, “There are some structural issues with our economy where a lot of businesses have learned to become much more efficient with a lot fewer workers. You see it when you go to a bank and you use an ATM. You don’t go to a bank teller. Or you go to the airport and you’re using a kiosk instead of checking in at the gate. So all these things have created changes in the economy.” Barbara Ehrenreich wrote in 2015 that “the job-eating maw of technology now threatens even the nimblest and most expensively educated.”
My take at the time was that the automation theory was wrong and that the United States was experiencing a sluggish job market recovery due to inadequate aggregate demand.
And one of my favorite pieces of my own Obama-era journalism was a 2015 Vox feature titled “The Automation Myth.” The article made several arguments: that labor market sluggishness was due to under-stimulative fiscal and monetary policy rather than structural shifts; that the rate of productivity growth had gone down rather than up; and that while fears of productivity-induced job displacement are a perennial theme in human history, productivity growth is fundamentally good. The real crisis Americans were living through, I argued, was that even though computers and the internet had transformed society and culture, they were not actually that big of a deal economically.
I believe time proved me correct on this.
By January of 2020, 80.6 percent of prime-age Americans had jobs. That measure cratered during Covid, but bounced back rapidly to 80.9 percent by June of 2023. The Obama-era labor market wasn’t sluggish because of ATMs — it was sluggish because policymakers were inflation-averse and settled for a slow recovery. In 2020, a different set of policymakers made different choices and got different results.
That was a lot of throat-clearing because I want to establish my bona fides before I say this: I think it’s time to think seriously about AI-induced job displacement.
I’m aware that this is an argument many people have made about new technologies in the past and that they have mostly been wrong. I’ve been a debunker of a lot of those previous arguments, which is why I think you should take me seriously when I say we should take this seriously.
The gathering storm
Previous rounds of this discourse were driven by a general tendency toward “technological unemployment” hype any time the unemployment rate is high. But one way our current situation differs from that of the 2000 bust or the 2008 recession is that there’s not some huge labor market problem that needs to be explained.
People find macroeconomics counterintuitive. Sitting around in 2013, everyone could see that the unemployment rate was high — maybe your cousin who’d recently finished college had to take a job at Starbucks because she couldn’t find anything better. But the idea that this would have been different if Obama had agreed to bigger tax cuts for the rich in exchange for Republicans giving him more money to spend on averting public school budget cuts doesn’t feel plausible, unless your cousin had specifically been looking for a teaching job.
Right now, though, the macroeconomy is in equilibrium.
The prime-age employment ratio is basically flat. Inflation is undesirably high, but it’s at a kind of plateau. And the hiring rate, notably, has slipped to basically a recession level.
We’re not in a recession, though, because companies aren’t laying people off.
Which is to be expected. If revenue is tanking and you need to do layoffs, you do layoffs. After all, revenue being down typically means there’s less work to do. Nobody’s happy about the layoffs, but the people who didn’t get laid off are happy to have kept their jobs. To show up one day and say, “Sales are fine, but we’re laying off 15 percent of our staff because we think the rest of you can use AI tools to get more done” is an unnatural way to run a business. Most managers prefer not to be thought of as assholes. And the survivors of that kind of purge are going to be pissed off, because you’re asking them to work harder while conceding there was no objective need for it.
There are certainly examples of successful companies doing this kind of thing — notably, a number of tech companies — but it’s a pretty nonstandard business practice.
What we’re seeing instead is that law firms are hiring fewer new junior associates and making fewer new equity partners. Investment banks, similarly, are talking about onboarding fewer entry-level analysts. Of course, if you’re on the bubble for an entry-level job at Goldman Sachs or a major law firm, it’s not like your fallback option is being out on the streets. You’ll probably get hired somewhere else. But the white-collar hiring crunch is going to shove a lot of people down the stack of employment opportunities.
And if a cash crunch does hit and organizations do need to lay people off, they’ll almost certainly find that the remaining staff is, in fact, able to get more done thanks to new technology. Unlike in the wake of the Great Recession, this time around we actually are seeing an uptick in productivity growth.
Alexander Bick, Adam Blandin, and David Deming found in a recent paper that 23 percent of employed Americans use generative AI at least once a week on the job and 9 percent use it every day. That is a very rapid pace of adoption by historical standards, and it’s largely taking place without the forcing pressure of an economic downturn. Some people no doubt use AI at work in the spirit of a high school student cheating on his homework, mostly gaining extra time to slack off; human organizations do not fully optimize themselves instantaneously. But the groundwork is laid for jobs to vanish and never return.
Two kinds of disemployment shocks
It’s worth distinguishing between two concerns about technology and unemployment:
One is that technological advance will create a huge pool of permanently unemployable surplus labor, a concern that has been raised since the dawn of the Industrial Revolution and has basically never come to pass.
The other is that technological advance will create a large negative shock to the earnings potential of a group of people big enough to shake the foundations of society.
The second is something that happens over time, and I think it’s in some ways under-discussed. As I’ve mentioned before, my mother was a high-level employee in the analog-era art department at Newsweek. She had X-Acto knives and rubber cement that she used to cut and paste page elements. She knew how to develop film. She could adjust the line height of typography with little strips of lead. This type of work was never a huge occupational category in the United States. But all cities had at least one newspaper and some kind of printing shop that employed people to do this stuff, and in New York — the center of the book publishing, magazine, and advertising industries — a lot of people had this kind of job. It was all swept away by the rise of specialized desktop publishing software like QuarkXPress and Adobe PageMaker.
This labor market niche was too small for its destruction to have upended anything broadly noticeable. And obviously it didn’t render people permanently unemployable. But it was a huge blow to the labor market value of a set of very real and once-remunerative skills. This kind of thing happens all the time in a dynamic economy, and it’s mostly good, unless it happens to you.
More recently, though, we had a large-scale employment shock to manufacturing, one driven mostly by the surge of imports from China rather than by technology.
This meant that factories that closed during the small recession associated with the dot-com bust mostly didn’t reopen. Neither did the factories that closed during the large recession associated with the 2008 financial crisis. And these plant closures and layoffs hit so many different kinds of manufacturing facilities simultaneously that there wasn’t much opportunity for someone with experience in one kind of manufacturing to lightly retrain and get another manufacturing job. People did get new jobs, of course, but those jobs rarely reflected their years of experience and accumulated skills, because they had to shift into whole other sectors of the economy. This had a major impact on our politics and society. And while I don’t think the lesson to take away from that experience is “trade is bad,” it’s definitely true that the situation was not handled responsibly by Bush-era policymakers. The scale of the shift was much too big for the government to just say, “It’ll probably work out in the end.”
If you look at the Bureau of Labor Statistics data for high-level occupational groups, the single largest one is “Office and Administrative Support Occupations,” which collectively employs 12 percent of the workforce. That includes 2.7 million financial clerks, 2.8 million customer service representatives, 3.1 million secretaries and administrative assistants, and 2.5 million people listed as “office clerks, general.” I’m not saying all 18 million of these people are going to be out of a job immediately. But a very large share of Americans have jobs in which reading and writing documents makes up much of the work. AI has gotten very good at this and is getting better at a rapid pace.
You don’t need to believe maximalist claims about superintelligence to see that a big storm is coming.
The danger of the longshoreman mindset
One concern about all of this, of course, is that it would be bad to have a lot of people disemployed by technological progress.
Another is that the fear of this sort of displacement might result in intense political pressure to halt technological progress in order to save jobs.
In New Jersey, a much larger share of people are employed as gas station attendants than in any other state. That’s because self-serve gasoline is illegal there. This anti-productivity rule tends to push up the price of gas, which the state counteracts with lower gasoline taxes than other northeastern states, which in turn means higher taxes on other things. It’s a tolerable situation, but piling distortions on top of distortions is not a great way to deal with technological progress.
Similarly, America’s longshoremen have fought — successfully — for years to prevent American ports from adopting productivity-enhancing technology. This is now enshrined in their new contract and has been enthusiastically embraced by President Trump, who has a personal relationship with the East Coast union’s leader.
I think there’s a huge risk that, in the face of powerful technological change, we’re going to see a mad scramble to enact longshoremen-style protectionism as a job preservation strategy. It seems like the coming wave of automation will mostly target female-skewing office jobs rather than the jobs of manly men like Trump’s beloved dockworkers, so Democrats will likely be more open to displaced workers’ pleas than Republicans will be. But the course of these things is unpredictable, and the longshoreman example illustrates that both parties are open to this kind of argument, if pitched the right way.
What we actually need to be doing is thinking more seriously about the welfare state.
In my “Automation Myth” piece, I emphasized that the lack of productivity growth was mostly bad. A huge surge in AI-induced productivity, if it happened, would be great for things like the sustainability of Medicare and Social Security. It would mean less pressure to raise the retirement age and more ability to offer things like generous parental leave. But it also might make the specific payroll tax that we currently use to fund our retirement programs less viable. We also probably want to start shifting some of these white-collar workers into jobs like teaching, where there are major barriers to entry and where the funding tends to come from the government.
Right now, though, there’s incredible pressure from the tech industry to adopt a Pollyanna-ish attitude. If you start talking about job loss and disemployment, you get called a Luddite and are treated to a lecture about the virtues of progress.
I don’t think that we should be Luddites about this at all.
But we should acknowledge that good things don’t happen automatically. Either we make a plan to harness productivity growth for general advantage, or we’re going to see a chaotic scramble to protect specific jobs and specific sectors. We already know that doctors successfully lobby for scope-of-practice laws that stifle competition from nurse practitioners. AI progress should be a huge deal for medicine, not just for drug discovery but for its actual practice. We are this close to living in a world where anyone can get a 24/7 diagnostic consultation for very little money. But given how successful the doctor lobby has been at banning competition from nurse practitioners, there’s no reason to think they won’t try to block robot doctors, too.
The world needs a constructive, thoughtful vision for how these changes can be broadly advantageous, or else the whole economy is going to descend into a wild scramble of rent-seeking in which only the interests of the best-connected are protected.
I enjoyed this, but I think it deserves a follow-up column about what policymakers (or workers) should actually do to get ready. It has much the same flaw as the recent Ezra Klein interview on AI, where there was a lot of talk about how government and society are not ready for AI and basically nothing about what anyone should actually be doing.
I'm a software engineer and founder of a startup. I've been using AI tools every day since the release of ChatGPT because of how useful they are for coding. In that time, I've been fairly measured in my predictions about AI — believing it was tremendously useful and would drive productivity gains, while maintaining skepticism about stronger claims of it meaningfully automating high-skilled professions like software engineering.
The introduction of AI agents and the latest model improvements over the past few weeks have completely changed my tune. Coding agents have been the single biggest boost to my productivity in the history of my career, without contest. I conservatively estimate that my productivity increased 300% overnight, and we’re still on the early versions of these tools. This has substantially reduced the number of software engineers we plan to hire over the coming years.
These tools need good judgment to be useful, however. If you want to use them on complicated tasks, it’s not enough to hand over the whole job and let them run unsupervised. The best results come from breaking the task down into subtasks, specifying which techniques the agent should use, and so on, as in the sketch below. Without good judgment, using these tools can be reckless, introducing bad patterns, bugs, and unreadable code into your codebase.
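To make that concrete, here is a minimal sketch of what the subtask-driven workflow I’m describing might look like, assuming the OpenAI Python SDK; the model choice, file name, and prompts are all hypothetical stand-ins, and a real coding agent involves far more scaffolding than this.

```python
# Hypothetical illustration: decompose one refactoring job into supervised
# subtasks instead of issuing a single "refactor this" instruction.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; utils/dates.py and the prompts are made-up examples.
from openai import OpenAI

client = OpenAI()

subtasks = [
    "List the public functions in utils/dates.py and summarize each one.",
    "Propose a refactor that isolates the timezone logic in its own module.",
    "Write pytest unit tests for the proposed module.",
]

context = ""
for step in subtasks:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a careful senior engineer. Do only the current step."},
            {"role": "user", "content": f"{context}\nCurrent step: {step}"},
        ],
    )
    answer = response.choices[0].message.content
    # Review each result before it shapes the next prompt; this checkpoint
    # is where the human judgment I'm describing actually enters the loop.
    context += f"\nStep: {step}\nResult: {answer}\n"
    print(f"--- {step}\n{answer}\n")
```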
The upshot of this is that senior software engineers just became far more valuable, while much of the work of junior engineers has just been automated. It’s not clear to me what the new on-ramp will be for computer science grads entering the software engineering labor force. Developing good judgment in software engineering has traditionally taken years of trudging through the types of tasks that AI now accomplishes in minutes.
I expect this dynamic will be similar in other industries. It seems that, for a while at least, the ladder has just been pulled up on new entrants to the professional labor force. If the elite overproduction crisis (https://www.noahpinion.blog/p/the-elite-overproduction-hypothesis) was bad in the past decade, I fear it pales in comparison to what’s coming.