Politicians need to take AI progress seriously
We need flexible thinking beyond boosters and doomers
It’s hard to predict the future, but I think it’s possible that, by 2028, many of the issues that are crucial today will be swamped by concern about AI and its impacts on the world.
This means we should be looking for political leaders who come across as smart and thoughtful on this topic. It also means that on substance, policy entrepreneurs should be putting more emphasis on helping politicians think through the implications, both positive and negative. Right now, I think the tech world believes that neither politicians nor policymakers really get AI or technology more broadly. That's in part true, but it's also been my experience that people in tech don't really understand economics or the existing structure of American public policy very well.
Interestingly, though, when JD Vance talked about AI with Ross Douthat recently, he committed the sins that I associate more with tech than with politics: He’s decided to position himself as “pro” AI, so he offers an abstract, high-level take on the benefits and doesn’t really engage with specific questions about the downsides. This is a problem for a politician, because making policy is largely dealing with downsides. Today, looking at the country as a whole, houses burning down doesn’t seem like such a big issue. But one reason house fires aren’t a big issue is that people came together, mostly via the government, to form things like fire departments and to write codes to make buildings safer.
Lots of problems in life are solvable. But they’re only solved if somebody solves them.
AI progress might slow down sharply or fail to pan out. Or it might accelerate us into a short-term singularity or global doom scenario. But it also might be a “normal” economic transformation with lots of upsides and lots of downsides. And in that scenario, it makes a huge difference how successful we are at maximizing those upsides and minimizing the downsides.
What happens if AI boosts productivity?
In the interview, Douthat asks Vance about potential downsides, saying he doesn’t want to talk about existential risk, but about “the way human beings respond to a sense of their own obsolescence.”
Vance, talking like a VC rather than like a politician from Ohio, just says that productivity is good — an answer he would roast someone for offering on trade:
So, one, on the obsolescence point, I think the history of tech and innovation is that while it does cause job disruptions, it more often facilitates human productivity as opposed to replacing human workers. And the example I always give is the bank teller in the 1970s. There were very stark predictions of thousands, hundreds of thousands of bank tellers going out of a job. Poverty and immiseration.
What actually happens is we have more bank tellers today than we did when the A.T.M. was created, but they’re doing slightly different work. More productive. They have pretty good wages relative to other folks in the economy.
One problem with this example is that while bank teller employment did continue to increase for years after the invention of the ATM, it peaked in 2007 and has fallen by about 50 percent since then. I'd say this mostly shows that the timing of technological transitions is hard to predict, not that the forecasts were totally off base.
But more broadly, “it will increase productivity” doesn’t really answer the question here.
Outsourcing major elements of the automotive supply chain to factories in Mexico has increased the productivity of the American auto industry, but it’s also been disruptive to specific communities that lost not only jobs but, in an important sense, their whole purpose. Unlike Vance and his boss, I don’t think the right answer to that is to slap tariffs on everything and try for autarky. But it’s still true that going maximally abstract and pointing to productivity gains doesn’t really address the problem.
Vance motivates his complacency by positing that AI will not only increase productivity, but will do so in a way that makes each individual person’s job cushier and easier. He says, “what might actually happen is that truck drivers are able to work more efficient hours. They’re able to get a little bit more sleep. They’re doing much more on the last mile of delivery than staring at a highway for 13 hours a day.”
Does this make sense? Fully autonomous taxis are operating right now in multiple American cities. I don't know exactly how quickly that will scale, but there's clearly no particular technical obstacle to automating last-mile delivery as well. The productivity promise of self-driving trucks is very real, but the promise is precisely that it will eliminate truck drivers' jobs, which is going to be touchy. Lying to people won't help.
Policy needs to adjust
Speaking of self-driving, it would be a shame if the upshot here is that taking long car rides is more tolerable when the car drives itself, so everyone drives more, so traffic jams get worse, so in the new equilibrium we’re spending more time in the car but not actually traveling longer distances.
This would be a dumb problem to have, because there is a well-known policy solution: congestion pricing. But enacting a limited congestion pricing program in New York was a titanic political struggle, and the Trump administration keeps threatening to kill it. If you’re an engineer or businessperson working on self-driving cars, and other people are bugging you about the downsides of a huge increase in vehicle miles traveled, I think it’s fine to sort of wave your hands and say, “There are easy ways to solve this and large benefits to self-driving.”
But actual politicians can’t wave their hands about this kind of thing; they are the ones who need to deliver the solutions.
I also worry that a lot of the potential upside to self-driving technology won’t be captured unless jurisdictions radically alter their policies around parking. Self-driving should allow us to make dramatically more efficient use of scarce space, but that doesn’t work if it’s illegal.
Beyond the impact on traffic, though, new sources of revenue will be essential, because AI-driven shocks to the labor market are going to destabilize our tax base. At an adequate level of abstraction, tech-driven productivity gains could be the solution to the problem of financing America's retirement programs. If human labor becomes less necessary, yes, there will be some job losses. But it also becomes dramatically easier to support a dignified life for people who are retired or otherwise not working. Hopefully, instead of mass unemployment, we have a graceful transition to a world where we can sustain generous Social Security and Medicare benefits, plus more generous paid family leave, more vacations, and other broadly desirable forms of non-work.
But in the real world, Social Security and Medicare are financed by payroll taxes. If payroll shrinks relative to the overall economy, then these programs' march toward insolvency will accelerate even in the context of a more prosperous world.
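To make the mechanics concrete, here is a minimal toy model with purely hypothetical numbers (not a forecast): because payroll taxes fall on labor income specifically, an AI boom that doubles total output while halving labor's share of income leaves payroll-tax revenue flat, even as the economy it is supposed to support gets much richer.

```python
# Toy illustration (all numbers hypothetical): payroll-tax revenue when
# total output grows but labor's share of income shrinks.

def payroll_revenue(gdp, labor_share, tax_rate=0.124):
    """Revenue from a flat payroll tax; 12.4% is the Social Security rate."""
    return gdp * labor_share * tax_rate

# Today: 100 units of GDP (normalized), labor earns roughly 60% of it.
before = payroll_revenue(gdp=100, labor_share=0.60)

# Hypothetical AI boom: GDP doubles, but labor's share falls to 30%.
after = payroll_revenue(gdp=200, labor_share=0.30)

# Revenue is unchanged even though the economy doubled in size,
# while benefit obligations keyed to overall prosperity keep growing.
```

The point of the sketch is just that the funding gap is an artifact of the tax base, not of overall scarcity, which is why it is a dumb, avoidable problem.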
That’s a dumb, avoidable problem.
But we do need to actually avoid it. In the Blade Runner movies, society has advanced technology but living standards generally seem to be low. That’s not inevitable, and I don’t want us to be doomers about technology, but it is possible to make bad choices — or fail to make good ones — in the face of change, and what actually happens depends on our choices. AI is clearly raising the demand for electricity, and I don’t think the Trump administration’s approach to energy policy, where they pretend air pollution doesn’t exist, offers a desirable solution here. At the same time, trying to de facto ban data centers in a desperate effort to hit arbitrary climate targets isn’t going to work either.
The solution, again, is well-known. Instead of an immiserated population living on a pollution-scarred planet and dreaming of the Off-World Colonies, we should tax pollution externalities and enjoy much lighter taxes on pro-social behavior. But this solution has been well-known for years without being implemented. Turbulent times make good choices more important than ever, but are we going to make them?
Bottlenecks everywhere
The other thing that needs greater economic consideration here is that changes in one part of the economy ripple into all the others.
Baumol’s Cost Disease is a famous example of this. Agatha Christie recollects in her memoir that when she was young, she couldn’t imagine being so poor that she couldn’t afford a servant or two, but never thought she’d be rich enough to own a car. What happened is that as productivity and wages rose, it became increasingly expensive to employ people to do servant-type work because it wasn’t benefitting from modern industrial processes. Nostalgia-addled right-wingers often get confused about this and decide that people were richer in the 1950s because there were more stay-at-home moms. It’s actually the opposite. People were poorer, so the opportunity cost of hiring your wife to be a full-time nanny and cook was lower.
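The Baumol logic reduces to a one-line formula: the labor cost of a unit of output is the wage divided by the sector's productivity, and wages track economy-wide productivity. A minimal sketch with hypothetical numbers shows why the cook got expensive while the car didn't:

```python
# Toy Baumol's Cost Disease illustration (hypothetical numbers): the
# relative price of a stagnant-productivity service rises as wages rise.

def unit_cost(wage, output_per_hour):
    """Labor cost of producing one unit of a good or service."""
    return wage / output_per_hour

# Early 20th century: wages are low everywhere, so servants are cheap.
car_then = unit_cost(wage=1.0, output_per_hour=1.0)
cook_then = unit_cost(wage=1.0, output_per_hour=1.0)

# Decades later: economy-wide wages are up 5x, factory productivity is
# up 5x, but an hour of cooking still yields the same number of meals.
car_now = unit_cost(wage=5.0, output_per_hour=5.0)
cook_now = unit_cost(wage=5.0, output_per_hour=1.0)

# The car costs the same in real terms; hiring a cook costs five times
# as much, because the cook's wage rose without a productivity offset.
```

The same arithmetic applies to any sector AI leaves untouched: if AI raises wages economy-wide, the relative price of un-automated work rises.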
We might see some of that in an AI revolution.
Some work will be automated away, but other work won't be, sometimes because of technical limitations, but also because it's a category error to ask whether AI will be able to play pro basketball or perform a live violin concerto. Modern Americans live quite far from subsistence levels, so people pay money for a lot of services that are patently unnecessary because we think they're fun or prestigious. The popularity of any of these activities might collapse under pressure from new AI-generated content, but it also might not, and the rewards to being great at a currently obscure sport like water polo or lacrosse might explode.
But there’s also a middle ground.
Perhaps someday construction robots will lead to a productivity explosion in building houses and apartments. But from where we sit right now, AI is coming for white-collar work first. This could mean explosive cost growth in the construction sector, where productivity has been generally falling since the 1960s.
AI also isn’t going to solve the regulatory constraints on housing supply that bite especially hard in the most in-demand and economically valuable areas. You can think of Eran Fowler’s iconic “Reality” image as depicting unbalanced growth.
Agatha Christie lived through an era when cars, appliances, and other durable goods became dramatically more abundant, but maids, nannies, and cooks became scarce. This is fondly remembered by some as a time of broadly shared prosperity. A world in which quality-adjusted digital entertainment becomes hyper-abundant but housing becomes scarce seems less appealing.
We need smart, flexible thinking
You may notice that I’ve pulled a bit of a trick here, and after warning about the difficulty of navigating disruptive change, I’ve just advocated for standard-issue technocratic policies — taxing externalities, eliminating costly anti-social regulations — that I thought were good ideas ten or fifteen years ago.
On the one hand, yes, I think these are good ideas.
And I also think this analysis of the kinds of policy changes we need displays a good deal more situational awareness than the majority of what comes out of DC think tanks, which at this point is mostly people rehashing agendas that were cooked up in Obama’s second term and have been stymied by the fact that nobody has won a large congressional majority since 2008.
But my ideas also aren’t good enough or creative enough.
There is an urgent need right now for smart, flexible thinking that pairs awareness of technological trends, and openness to the possibility that the boosters might be right, with the detailed understanding of taxes, regulations, and the existing social safety net that technologists lack. What's a viable tax regime for a world in which AI progress sharply diminishes the economic value of labor? How should copyright law work if it's trivially easy to generate "good enough" video, audio, and images?
I do want to give Vance credit for engaging with the subject in a way that Donald Trump obviously isn’t and few Democrats are. But what he’s saying doesn’t really make sense.
It's a pitch designed to appeal to tech industry people who are worried that voter concern about AI will lead politicians to adopt anti-AI regulations. Vance is signaling solidarity with technologists who are fundamentally optimistic about the possibilities of progress. On some level, I share that optimism. But optimism is contingent on politicians actually tackling the first-order question of how to address dislocations to our labor markets and our tax system. The job isn't just to say it'll work out for the best; it's to make good economic policy choices that make it work out for the best.