426 Comments
GuyInPlace's avatar

Teaching an entire generation of people that they don't need to know how to write and organize their thoughts and just embrace TikTok brain rot seems like it might have implications for society.

Expand full comment
Ben Krauss's avatar

What about the many adults who are letting themselves fall under the spell of TikTok? Median IQs are going down everywhere.

I also think there's a chance that younger kids (say 12 and under) might be the generation of backlash to this; it doesn't seem very sustainable to just constantly be miserable on social media.

Expand full comment
lindamc's avatar

Some friends and I were talking about this over the weekend and it seems like a possibility. I think that in a previous thread I mentioned seeing kids contentedly reading books while waiting for a restaurant table while their phone-addled parents mindlessly scrolled. I also had to run a lot of errands over the weekend and was pleasantly surprised by the excellent customer service I got, in each case from teenaged boys. Of course these are just a couple of anecdata points, but I will take optimism anywhere I can find some right now.

Expand full comment
David R.'s avatar

I am trying to do less of my long-form reading on screens, specifically to set an example for the kids.

Expand full comment
Hilary's avatar

I switched my New Yorker subscription to print for exactly this reason. Now I have small piles of them all over the house that I can pick up when I'm tempted to start scrolling. I think it's great; my spouse is less enthused.

Expand full comment
David R.'s avatar

I should probably get the Inquirer in print instead of digitally, haha. Maybe go back to an Economist subscription too?

Expand full comment
Hilary's avatar

Ok this raises another issue that's been on my mind: When I was a kid my family got the newspaper every day, and because it was there I read it, so I knew what was going on in my region and the world.

When my kid was 4-5 we would pass a newspaper kiosk on our way to preschool and talk about the headlines, but they took the kiosks out a while back. So she doesn't really encounter news at all. I haven't come up with a good fix for this.

Expand full comment
GuyInPlace's avatar

I have a giant bin of these in a rented storage space from before I got married, and I'm not looking forward to explaining why that box is so heavy when we move.

Expand full comment
Twirling Towards Freedom's avatar

I was just thinking about how some of my recent interactions with "youths" lately have left me fairly impressed with their generation. Just anecdotal, but maybe the kids are alright.

Expand full comment
Will I Am's avatar

I have to confess, I have 5 books on the table next to my bed that I haven't even started to read because there are just too many shows to watch, too many youtubes saved, and too many Ezra Klein podcasts.

I am afflicted with the same disease!

Expand full comment
TR02's avatar

I find the microplastics pretty disturbing. I'm rooting for work on mitigation. Maybe someone can sell the idea of plastics as unnatural contaminants to the current US government and see if it overcomes their political opposition to public health and the general welfare? Or maybe research in Europe is still reasonable enough for this, albeit not as well-funded as the US (used to be?)

I think even the most pro-business and anti-big-government people should be alarmed at the idea that they have several grams of plastic in their brain, and rising.

Expand full comment
Eliza Rodriguez's avatar

RFK? Plastic in brains is easy enough to understand as a problem that even he might get on board with it.

Expand full comment
GuyInPlace's avatar

Is the worm pro-plastic?

Expand full comment
GuyInPlace's avatar

Good thing I have this plastic spoon to push that plastic spoon out of my brain! It fits perfectly in the spoon-shaped hole.

... I may have created a second problem now.

Expand full comment
Nick Magrino's avatar

Obviously the murder is an extreme situation, but I thought this article was a good illustration of how things are going for a lot of people who are looking at their phones for 14 hours a day.

https://www.nytimes.com/2025/05/22/nyregion/sam-nordquist-trans-man-murder.html

Lot of bleak paragraphs in there.

Expand full comment
Kenny Easwaran's avatar

Let’s try not to do that! Let’s try to teach young people that they *do* need to understand the *important* features of organizing their thoughts, and to be able to recognize when the robot has done the wording in the way that achieves *your* communicative intentions.

Expand full comment
Seneca Plutarchus's avatar

Editing is a difficult skill. Humans generally don't do as well at things where intermittent supervision is required, which is one of the reasons semi-autonomous driving is a problem.

Expand full comment
Kenny Easwaran's avatar

Editing definitely is difficult, but there are a lot of ways in which editing someone else's writing is easier than editing your own! I've found this to be very true as I have progressed in my career from doing most of my editing on my own writing to doing most of my editing on the writing of grad students and of authors submitting papers to journals I work for.

I also think there are important differences between the difficulties of distracted driving and the difficulties of distracted editing. Driving is something where errors need to be corrected in real time, while editing is not. I think playing an instrument or doing a dance or playing a sport all have this in common with driving, while writing has commonalities with doing math or construction. I suspect this makes more room for partial automation in these cases.

Though I think there are also useful ways to think of this in terms of collaboration. Music and dance and sport are often famously collaborative endeavors, as writing and math and construction are, while driving cars rarely is (and driving is structured very differently on planes and ships where it is collaborative). Partial automation probably works better for things that can be collaborative.

Expand full comment
Milan Singh's avatar

I agree that it’s easier to edit someone else’s writing than your own, but I think that being a good editor and being a good writer are connected. It’s hard to give good edits if you yourself are a weak writer, and editing other people frequently tends to improve your own writing.

Expand full comment
Kenny Easwaran's avatar

Very much so! We have to learn better how to teach this skill, the same way we taught arithmetical skills in the age of the calculator. (Most mathematicians of the past half century probably do have a much less intuitive understanding of logarithms and nth roots than mathematicians of the previous century did, which is a real loss, though we have gained intuitions of new sorts enabled by the new tools).

Calculators never made it obsolete to learn times tables or even long division, even though these days when I divide by hand I tend to do it by repeatedly reducing a fraction rather than long division.
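
For example (my own made-up numbers, just to illustrate that fraction-reduction approach):

    252/18  ->  126/9  ->  42/3  ->  14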

Expand full comment
Milan Singh's avatar

Fair point. I just realized that if I tried to do long division right now I’d probably have to Google how it works again.

Expand full comment
Dan Quail's avatar

A whole cohort whose notion of agency is rat paddling burrito deliveries via DoorDash using Klarna.

Expand full comment
Nicholas12's avatar

I absolutely hate that I understood this sentence at all.

Expand full comment
Jon R's avatar

I feel proud that I only understood "burrito" and "DoorDash." Mmmm...burritos.

Expand full comment
Dan Quail's avatar

It is depressing.

Expand full comment
srynerson's avatar

"Rat paddling"???

Expand full comment
Dan Quail's avatar

Rats in a Skinner box whose brains are broken by operant conditioning.

Expand full comment
srynerson's avatar

Oh, like, "*A* rat paddling *FOR* burrito deliveries"? I was reading "rat paddling" as describing the burrito deliveries.

Expand full comment
Nicholas12's avatar

I wonder to what extent this boomerangs around and increases the value of humans who are in the top quintile of written/language-based reasoning. We have already wildly increased our productivity in food availability, but many people still pay handsomely for quality restaurant meals by elite chefs. I think a yearning for human interaction will long exceed what AI can supply. It's more a demand problem than a supply one.

Many people proselytize that AI is the end of lawyers, but if you have bet-the-farm litigation or a billion-dollar business deal, who are you realistically hiring: Harvey LLM or WASPy & WASPier LLP?

Expand full comment
StonkyMcLawyer's avatar

I think it does, but it does it for a smaller and smaller group of people. In the law firm example, the bet-the-company case might have been staffed by 20 attorneys, mostly more junior. I expect in 10 years, AI will have displaced a lot of the junior lawyers (yes, this creates obvious pipeline issues) so that the case is staffed with 10 attorneys instead. These people will continue to be highly compensated, perhaps even more than today. But there will be fewer of them.

Expand full comment
Nathan's avatar

Not 10 years…already happening. The reasoning models, if prompted properly, can already do the junior associate first draft of the motion in limine.

Expand full comment
Johnson's avatar

Maybe kind of, but so far the notable thing about AI in law is that uptake is very very low compared with e.g. software engineering, despite predictions that it would be on the front line of adoption. Lawyers do not care enough to do intricate prompts and are completely unwilling to rely on current-generation AI models for anything requiring legal research (for good reason).

The big exception is in doc review, in which large-scale labor has long been automated and/or outsourced when possible.

Expand full comment
StonkyMcLawyer's avatar

Kinda. Lots of handholding required, but there definitely are lots of efficiencies there.

And I assume some of the efficiencies will result in greater demand rather than just reducing the hours of work required (e.g., more documents are reviewed in discovery and due diligence as a result of AI efficiencies, more in depth research on more issues, etc.).

Expand full comment
Nicholas12's avatar

Good thing Law School Applications are at an all time high this cycle! Surely that cannot possibly backfire.....

Expand full comment
StonkyMcLawyer's avatar

This is probably in part because entry-level unemployment for recent college graduates is already rising.

Expand full comment
Nicholas12's avatar

And GOP presidents tend to increase law school apps which is truly such a perverse trend given how vanishingly few graduates become the top litigators fighting the government in complex appellate law.

Expand full comment
GuyInPlace's avatar

And the entire entry-level tier of jobs could disappear.

Expand full comment
David Abbott's avatar

I think computers will soon be better at creating words than humans. Humans will be useful for things like touch. I predict massage therapy will be a growth industry. More generally, industries where humans try to make one another happy will flourish. However, non-elite acting has pretty much been gutted by TV and movies, so it's complicated.

Expand full comment
BronxZooCobra's avatar

“ I predict massage therapy will be a growth industry.”

Really?

https://www.aescape.com/

Expand full comment
David Abbott's avatar

Thus far, AI has created a better facsimile of human language than of human skin. Sex dolls are still a joke.

Expand full comment
BronxZooCobra's avatar

Once the sex dolls are perfected is there any chance we don’t go extinct?

Expand full comment
David Abbott's avatar

There are dudes who actually want children. There are women who will go to sperm banks. Even if fertility dropped to 0.8, there would be a genetically viable human population in 17 generations. That's a long time for things to change.

Expand full comment
Edward Scizorhands's avatar

Go on...

Expand full comment
David Abbott's avatar

If sex dolls were worthwhile, there would be buzz. There isn’t. QED. Haven’t tried one myself.

Expand full comment
Seneca Plutarchus's avatar

The studies I have seen show that AI is much better at helping lower-performing people close the gap with higher-performing peers than it is at helping the high performers. So yes, the few high performers may be able to command huge sums marshaling the forces of AI and AI-assisted lessers, but the lessers will be completely interchangeable and a dime a dozen in terms of compensation, in my estimation.

Expand full comment
Hilary's avatar

The exact opposite seems to be true with software engineering. The current AI models are more useful to experienced engineers and can lead to disastrous results in the hands of inexperienced juniors. They definitely do not make a junior engineer into or even close to a senior engineer.

Expand full comment
Jimmy Hoffa's avatar

In my experience AI is really good when it gets something tedious but repeatable right (like code for plots) and really, really bad when it gets something hard wrong (some solvers) or something slightly wrong that can be hard to catch (wrong method to calculate standard errors for instance)

Expand full comment
Seneca Plutarchus's avatar

I find it interesting that AI is bad at poker, as seen in one of Nate Silver's recent posts. The Nash equilibrium solves for poker have been done and are published, so even if the LLM is bad at solving the game itself, you would think just having ingested the solves would let it use them successfully.

Maybe training has to be more specific and honed for high-level play?

Expand full comment
Hilary's avatar

If by AI you mean LLMs (and not a purpose-built poker algorithm), then it's not at all surprising that it's bad at poker. LLMs don't know anything; they have a statistical model of an enormously large amount of written information. Sure, that information contains the algorithms to effectively solve poker, but it also contains a likely much more massive amount of written words about poker strategies that aren't effective or are only marginally effective.

When an LLM gives an answer, the model assigns probabilities to possible next tokens given its training data and current context (system prompt and chat history), and then picks one based on those weights. In pretty much all of the commercially available models, the sampling is not tuned to always select the statistically most likely response; there is some allowance for randomness. This is also part of why hallucinations happen.
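
Roughly, in code (a toy sketch with made-up scores and a generic softmax-with-temperature sampler, not any particular vendor's internals):

    import numpy as np

    def sample_next_token(logits, temperature=0.8, seed=None):
        # Turn raw scores into a probability distribution (softmax),
        # then draw one token index at random according to those probabilities.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical scores for four candidate tokens. Greedy decoding would always
    # return index 0; sampling occasionally picks the others, which is the
    # "allowance for randomness" described above.
    print(sample_next_token([2.0, 1.5, 0.3, -1.0]))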

Expand full comment
Jimmy Hoffa's avatar

I'll have to read the article, but a great way to make a living at poker is being completely merciless about destroying bad poker players and avoiding good ones.

Expand full comment
CuriousReader4456's avatar

Much like the top chess engines, top poker engines are better than any human. No need for the LLM to crush humans.

Expand full comment
John Freeman's avatar

"WASPy & WASPier"...?

Expand full comment
Nicholas12's avatar

Just teasing that so many elite "white shoe" law firms are named for long-dead founding partners of very patrician-sounding white Anglo-Saxon Protestant backgrounds from the 19th and 20th centuries: Algernon Sydney Sullivan (of Sullivan & Cromwell), John Chipman Gray (of Ropes & Gray). Not exactly reflective of the current diversity of the industry, or of the Catholic and Jewish representation on the federal bench relative to underlying population size, either. (SCOTUS currently has but one self-professed Protestant, KBJ.)

Expand full comment
Johnson's avatar

Though it appears that Gorsuch converted to Anglicanism/Episcopalianism from Catholicism while studying at Oxford--you can't get WASPier than that.

Expand full comment
Will I Am's avatar

This is the apocalypse I find the most likely, essentially a version of 2006's Idiocracy that is not in any way funny.

Expand full comment
Anaximander's avatar

My romance/optimism take is that AI will reinvigorate the value of a liberal arts education. There will be more appreciation placed on the ability to think critically, rather than developing some technical skill that an AI is already better than humans at. Philosophy majors, your time in the sun has come!

Expand full comment
David R.'s avatar

When the heck was the last time that the median liberal arts program effectively inculcated critical thinking skills? Lol.

Expand full comment
Ray Jones's avatar

Currently.

People's smugness about how shitty they think education has become is tiresome.

Expand full comment
David R.'s avatar

I think primary and secondary education are fine, in the main.

Tertiary education’s problem is not the educators or the pedagogy, it’s that various liberal arts and business programs have become the catch-all baskets into which *paying* students ill-suited to academic tertiary learning fall and scrape by.

Few universities outside the top tier are incentivized to aim for rigor in these programs.

Expand full comment
Peter Gerdes's avatar

Maybe we should make sure they learn the slide rule too. Ultimately, learning is about helping us actually navigate the world and make good decisions. If you are going to have a calculator everywhere you go, memorizing rote calculation rules is stupid; it doesn't help you do anything better.

If the next generation uses AI to skip spending a bunch of effort learning grammar or doing rote work, and approaches problems using AI assistants to the maximum, and that means they can invent new tech faster, make better financial decisions, and more accurately reach conclusions about policy implications, then that's a victory whether or not they rely on those tools or can do what we associated with intelligence when we were young.

Indeed, one of the biggest problems with our education system is that old people decide that what they learned when they went to school is 'real' intelligence and that without it you are not really smart; it's why our children waste so much time memorizing things that they'll never use. Just give them problems like the ones they'll face in the world, and however they figure it out is fine.

Expand full comment
StonkyMcLawyer's avatar

I think this ignores how people are using AI. If they stop attempting to reason on their own, they will lose that ability. And while someone being dependent on a calculator for multiplication isn’t a significant loss in general reasoning, someone being dependent on AI to analyze and answer questions of substance about the world gets much closer to that person no longer having the ability to understand the world around them.

Expand full comment
Peter Gerdes's avatar

Also, I suspect you may just be overestimating how people use their own brains. It's amazing how much of most people's thought process is just pattern matching without anything more, and our education system resists with a vengeance demanding more before late college. I want to suggest the real distinction is the ability to solve hard novel problems, not whether it is done with or without AI.

In other words, AI is just exposing the fact that most of our education isn't actually concerned with thinking and I've never seen much intrinsic value in that at all.

Expand full comment
GuyInPlace's avatar

It's so weird that so many pro-AI takes end up at this weird nihilistic point that sounds completely sociopathic.

Expand full comment
Ray Jones's avatar

I continue to be shocked by the number of people who think education is basically useless.

Expand full comment
Hilary's avatar

It tracks. An alternate name for sociopathy is a lack of empathy, and believing that all other people’s brains are mere pattern-matching parrots would certainly qualify as thinking with a lack of empathy.

Expand full comment
StonkyMcLawyer's avatar

I hadn’t realized how many people watched the Matrix and decided that the machines were right.

Expand full comment
Seneca Plutarchus's avatar

I wonder how many people who are actually decent enough at math to do something technical are calculator dependent for multiplication. I bet it’s a marker for poor aptitude.

Expand full comment
Hilary's avatar

Could be, though I’d defend the use of calculators for floats (ironic considering how poorly computers handle floating point math). Multiplication and division of non-whole-numbers is pretty difficult to do in your head unless they are very simple.
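
For instance, the classic Python illustration of that floating-point weirdness (nothing specific to any calculator):

    # 0.1 and 0.2 have no exact binary representation, so their sum
    # is not exactly 0.3.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False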

Expand full comment
Seneca Plutarchus's avatar

Yes, I would believe that, but I would also believe if you told me there was a high correlation between needing a calculator to do most multiplication and having trouble making change out of a register when given cash.

Expand full comment
Peter Gerdes's avatar

So are they using that AI to solve the same kind of problems you think should be solved? If so, what's the problem? I can't hunt game on my own, start fires on my own, or do many other things my ancestors could, and I don't bother practicing direction sense. If you can reach the kind of conclusions you need to for scientific progress and political deliberation, who cares if it happens partly via a program?

Would you feel the same way if we got brain implants that made us smarter since then we could quite literally no longer reason without that machine in our head? There may be other issues with that idea but I don't think a good objection is just to say now 'you' aren't able to reason on your own. In practice you reason better.

Expand full comment
Greg G's avatar

It sounds like you're advocating for a truly slippery slope. Yes, every new technical advance results in many people losing skills. Unlike GPS, AI will be able to do most things people can do. What do you think of the scenario where people rely on AI to handle every communication and to decide what they think about everything, and thus build no cognitive skills? I think actually doing something challenging and meaningful is vital to humanity.

Regarding the brain implants, if the outcome is that the human brain is just a case for an implant that does all the thinking, then yes, that's very bad. We are not just fungible units of cognition, and delegating all of it to the AI is not fine even if outward performance gets better.

Expand full comment
Marc Robbins's avatar

In my proposed dystopian story, all at once all GPS satellites are knocked out and all contact information from our phones is erased. The economy grinds to a halt as no one knows how to drive from point A to point B and when they pull over to ask strangers for driving directions, those people don't have a clue either. Meanwhile, all social connections break down as no one can remember any phone numbers, including of their family and closest friends, leading to alienation and anomie and finally the breakdown of law and order with anarchy and destruction of the society.

Order, and a new society, is born when someone accidentally stumbles across some old Triple A maps hidden away in the attic.

Expand full comment
Monkey staring at a monolith's avatar

The military has done a really good job of keeping many of these skills alive. When I went through training as a lieutenant at Quantico we did basically the first two-thirds of the training curriculum with tools like lensatic compasses, protractors, paper maps, and acetate sheets. Radios were voice-only. GPS and radio data tracking of units only came in at the very end.

I understand that the Navy is now training on sextant navigation.

Expand full comment
Greg G's avatar

Excuse me for a moment while I go print out my contacts. 😅

Expand full comment
Peter Gerdes's avatar

You are characterizing that scenario very contentiously -- there is obviously the skill of effectively using an AI assistant. I'd argue it's just like the transition to literacy. It used to be that people could store virtually everything they would ever use in their memory but once we became literate (and yes the ancients freaked out about how we were losing something) we integrated the written word so intensely that most modern scholars or technicians would find it extremely difficult to do their job if they couldn't open up a reference.

So imagine someone becomes a groundbreaking mathematician or scientist by incorporating AI so heavily into their workflow they would be lost without it -- the way we would if we lost the ability to read. It's not that they are just mindlessly following the AI direction but it's an integral part of how they think and reason. As long as they really will have access to AI pretty much anytime they need it (as we do with books) what's the problem? The problems get solved, good decisions get made, they feel accomplished so what's bad about it?

If your worry is that people will just stop contributing at all because the AI will be truly superior, that's a whole different issue, and since it would make human labor economically unimportant, we would have to rethink a lot of society at that point -- but that's a different conversation for a much further future.

--

How is a brain implant different from a heart implant? People used to think it wouldn't 'really' be you if you had an artificial heart. Yes, we may not be fungible, but the key question is who is 'us.' Unless you think that neurons are somehow specially able to generate experiences (not crazy, but surely not obviously true), you don't have to be just the neurons; you can be the whole system. I mean, if you grow up with the implant, why is it any less you than if it turns out some of your neurons have a deceased-in-utero sibling's DNA?

Point is just that there is nothing special about what evolution happened to give us. If we came with an AI built into our head we'd be fine regarding the full system as us so why does the fact that it didn't evolve that way matter?

Expand full comment
Greg G's avatar

I'm trying to highlight that it's a spectrum, and that it matters where on the spectrum we end up. The "implant in a skull" future for humanity is one I want to avoid, although you make a good point that it's hard to pinpoint exactly why. It's a ship of Theseus style situation. I suppose I would say that if you start with a wooden ship and replace each plank with steel over time, when you have an entirely metal ship, then it's a different one. Maybe it sails better, but a tradeoff has been made.

Expand full comment
GuyInPlace's avatar

Students are using AI to avoid having to do work, which means they don't build the mental muscles necessary for adult life.

Expand full comment
Peter Gerdes's avatar

Yes, I agree but if we wanted we could just actually give them the kind of problems (or at least similarly difficult ones) they will need to solve in adult life.

The problem right now is that we just give them a bunch of hoops to jump through that an AI OR a person can just kinda vibe through, so they stumble when they get to a real problem. But the problem isn't AI; it's that rather than giving people math problems that require creativity and proof, we pick ones from a list where we've told them how to solve them step by step, and just hope they kinda pick up, by osmosis, the ability to guide themselves when they weren't told exactly how to do it.

If we weren't too scared to ask people questions we didn't tell them exactly how to answer, we could actually teach them to think (with the AI, in my preference, but better without it too). It's not impossible: they come up with plenty of hard novel problems for the math Olympiad every year, and sure, those are too hard for the average student, but you can make easier problems that equally require creativity and mimic what real problems do.

Expand full comment
Monkey staring at a monolith's avatar

Funny enough, my brother and I have both talked about how math education could be improved by teaching use of slide rules. People would certainly understand logarithms more than they do now.

Expand full comment
Peter Gerdes's avatar

What does understand mean in this context? Have an intuition for how quickly the function grows? Or do you mean something more like: be able to derive the properties it has from its definition as the inverse of e^x, or be able to prove various facts about how fast approximations to it converge, the way a mathematician might?

I mean, I find it striking that if I go into a math department, the people there will often not even have the extent of associations (the function here should be about this big) that the students they teach do, and will often make more algebra errors because they don't have the mechanical memory but just rederive what they need. But in actually figuring things out they do much better.

Expand full comment
disinterested's avatar

Your last paragraph makes no sense in the context of this thread. Those teachers clearly learned the things that you are dismissing as equivalent to making a fire and it gave them the mental capacity to tackle the harder problems. You’re doing the underpants gnome meme here!

Expand full comment
Peter Gerdes's avatar

It makes sense regarding the slide rule comment in particular, because a slide rule gives you no theoretical understanding, only muscle memory.

Expand full comment
Coriolis's avatar

Understanding elementary math has been useful for a very long time. Just because you can push buttons on a calculator doesn't mean you understand what to ask, or what the answer is.

So far, AI is like that, but even more so. Great for those who understand the underlying mechanics and want a fast response, terrible for those who don't.

Expand full comment
Jimmy Hoffa's avatar

Agreed. You can only wander so far away from drill and kill… part of being good at math is recognizing very complex patterns instantaneously and with as little effort as possible so you can save your limited capacity for really hard stuff at the frontier.

Expand full comment
David R.'s avatar

True in slightly differing ways for most every field, I think.

Expand full comment
Marc Robbins's avatar

Jonathan V. Last wrote a somewhat trolly piece recently saying it is fine if students cheat with ChatGPT (https://www.thebulwark.com/p/unpopular-opinion-let-college-students). He saw the requirement to write college papers as so much busywork, so just pull what you need from the LLM. Oddly, he said learning how to *write* isn't that important in college if you replace it with *reading*, without delving into how hard it is to learn to read deeply; unless you can absorb and process what you read (i.e., by writing about it), you don't really know how to read.

Expand full comment
Wyatt Barnett's avatar

They said roughly the same thing about our founding fathers' brains being rotted by coffee and pamphlets.

Expand full comment
policy wank's avatar

Pamphlet-brained should become a new sick-burn insult!

Expand full comment
Tom Hitchner's avatar

Some predictions of bad things happening didn’t come true, whereas others have come true.

Expand full comment
David R.'s avatar

We were a good ways away from the frontiers of hacking human dopamine receptors that we've reached today.

Past predictions at times being wrong is not a sufficient reason to dismiss present ones.

Also worth noting that at the same time folks in North America and Europe were writing polemics about the ills of short-form political writing and caffeine, their Chinese counterparts were writing about the ills of opium.

Within a century China's opium problem peaked with perhaps 25% of the population using it, and as much as 10% severely addicted.

Not all predictions of doom are wrong. Few are permanently right, but that can encompass a world of pain before society claws back from the brink.

Expand full comment
Twirling Towards Freedom's avatar

Books were a terrible invention because we lost the ability to pass on the oral tradition of storytelling.

Expand full comment
Charles Ryder's avatar

>Teaching an entire generation of people that they don't need to know how to write and organize their thoughts and just embrace TikTok brain rot seems like it might have implications for society.<

We'll soon be like the Eloi.

Expand full comment
David Abbott's avatar

Oral cultures also need grammar.

Expand full comment
City Of Trees's avatar

It's intriguing to compare this article to one Matt wrote a decade ago, The Automation Myth. [https://www.vox.com/2015/7/27/9038829/automation-myth] There, his statement was "[D]on't worry that the robots will take your job. Be terrified that they won't," because if they don't, it will mean that productivity will remain stagnant, and so too will quality of life. Reading Vance's quote is what reminded me of that article. But here, Matt takes a more cautious approach as to what other implications such increased productivity could have. For example, back then, Matt hypothesized that more robots could lead to less work and perhaps a reduction in the age to apply for Social Security, but here Matt worries about the funding structure of Social Security. The two articles could be complementary more than contradictory, but it's still good to compare them.

Expand full comment
Jaxon Lee's avatar

I think the difference is that today's generative AI boom is far more capable than any of the machine learning algorithms and robotic advances that existed a decade ago. Back then, it looked like a slowly rising tide. Today it looks like an advancing tidal wave.

Expand full comment
Comment Is Not Free's avatar

Back then machines were taking over other people's jobs; now they have the potential to take over his job.

Expand full comment
GuyInPlace's avatar

There's also the fact that a lot of this disruption is occurring during child development, not when they're in the workforce. That's going to affect cognitive development, grit, etc.

Expand full comment
Jason's avatar

There’s a sweet spot for productivity gains that solves for maximum economic growth and minimal social disruption.

Expand full comment
David Abbott's avatar

I don’t trust politicians to find it.

Expand full comment
Thomas L. Hutcheson's avatar

That spot does not depend on the amount of productivity (more/less), but on other policies adapting to the productivity possibilities.

Expand full comment
Charles Ryder's avatar

So tidal that unemployment has skyrocketed to a scary... 4.2%.

Expand full comment
Maxwell E's avatar

The unemployment rate for recent college graduates has risen dramatically and is now at 6%. Do you see that as a statistical anomaly?

Expand full comment
Charles Ryder's avatar

Not really, no. I see that as a sign of a weakening economy. I mean, I'm not claiming we've repealed the business cycle!

For the record, when I refer to myself as a "skeptic" of mass unemployment doomerism, I'm using that term in the sense I believe it is supposed to mean, i.e., "agnostic" or "unconvinced." In other words, *I'm not convinced either way.*

So sure, AI could soon trigger a collapse in the net job-creating rise of new sectors. Maybe! I just haven't seen anything I believe is evidence of this (yet).

Again, we've seen a major expansion in the use and importance of AI over the last decade, especially over the last five years. The "the technology isn't ripe enough to start destroying jobs on net" explanation might have been plausible 12 or 15 years ago. But it seems a stretch now: I reckon we should have started seeing at least *some* evidence that the old job-creation machine was beginning to be overwhelmed. This is the case implied by AI jobs doomerism, because nobody disputes that individual sectors will be mauled, or that individual kinds of jobs will go extinct. It's always been that way. But new sectors that result in a net increase in jobs have always arrived. I really doubt this dynamic has ended.

Expand full comment
Maxwell E's avatar

I genuinely hope you are right. I don’t think we will begin to see catastrophic impacts for another ~18 months or so (90% CI 6mo) — rapidly accelerating after that point.

Expand full comment
John from FL's avatar

The cynical part of me thinks the more cautious approach of today is because the automation is coming for Matt's job/career (and those of Matt's friends, colleagues and social circle) rather than some factory workers in Dayton or Kokomo.

Expand full comment
splendric the wise's avatar

Too cynical I think. Matt comes from money, and Slow Boring grosses more than a million a year, so I doubt he worries much about his own income. Also, for basically his entire adult life, journalism has been dying, so worries about his colleagues would be nothing new between then and now.

Expand full comment
Ven's avatar

You say that, but he’s also admitted to using Claude for research. Some percentage of Matt’s output is already AI-generated.

Expand full comment
disinterested's avatar

You're using "AI-generated" in a way people don't normally think of it. This is like saying using a search engine for research means your output is "Google-generated." Basically no one objects to using an LLM as a natural language search engine (well, except maybe on copyright grounds, which I'm sympathetic to).

Expand full comment
Ven's avatar

It’s pretty common to describe something as the “product of googling”.

I don’t see why offensiveness is a relevant criterion.

Expand full comment
disinterested's avatar

I've never heard anyone say that, and I said nothing about "offensiveness". I am saying it's misleading.

Expand full comment
Mike Carmody's avatar

I suspect it also has to do with a decade of watching our government fully fail to handle any other kinds of technological advancement.

Expand full comment
Ken in MIA's avatar

Where did the government have that role but failed?

Expand full comment
Mike Carmody's avatar

I just mean there has been next to no meaningful dealing with any technological advancements of the last ~10-15 years, at least.

EVs, Crypto, social media, smartphones, cheaper nuclear reactors, AI, I could go on and on.

Expand full comment
City Of Trees's avatar

I could see concern for his peers in the parenthetical, but I have high doubts that automation will find an audience for take slinging.

Expand full comment
Eliza Rodriguez's avatar

Oh, it's coming for working class jobs. The most prevalent occupation for non-college-educated men, for example, is transportation. Every single one of those jobs might be automated in the future.

Expand full comment
David_in_Chicago's avatar

Just clarifying because it's a good example. Full Truck Load (FTL) transport is really three jobs: (1) loading, (2) driving, (3) unloading. Automation might eliminate (2) but there's no path forward for (1) and (3) due to the load strapping complexity. If you can't automate the *entire* job, then you can't eliminate the job. This same problem exists for nearly every job. You can automate a % but not 100%.

Expand full comment
Eliza Rodriguez's avatar

You don't think companies will just allocate loading and unloading to other workers? Or, in some cases, hire people who just load and unload? Think about 18-wheelers moving cross country. (If you think auto-driving Tesla Ubers are scary, imagine these monsters driving next to you on the highway!)

Ubers and Lyfts? Auto. Even when they deliver. They'll get the grocers/ restaurants to put it in the truck and customers to get it out. Customers will go for that if their takeout is subsequently cheaper to get delivered.

Amazon is trying to get drones to deliver. Not sure where they are with that. But if AI will be smarter than us, I think it can program drones to put stuff in the right places.

Expand full comment
David_in_Chicago's avatar

"You don't think companies will just allocate loading and unloading to other workers?"

No. Not for FTL. The pick-up locations are far too fragmented, and then you'd blow up the driver wage arbitrage. Possibly for LTL. But even the hub-and-spoke models there are super complex.

EDIT: Just so we're talking about the same thing ... a lot of FTL looks like this: https://www.shipmoto.com/hubfs/20140221_154946.jpg

Expand full comment
Aaron's avatar

This feels pretty accurate to me. Only thing I would add is it applies to knowledge workers more broadly and not just Matt.

Expand full comment
StonkyMcLawyer's avatar

I doubt that. A decade ago, the internet was already decimating journalism.

Expand full comment
CuriousReader4456's avatar

Slinging takes to a well-informed, established audience is very hard for an AI to replace.

Expand full comment
Polytropos's avatar

I actually think the different approach is warranted— AI has a much larger risk of becoming a full human worker replacement than factory robots did, and its deployment path risks displacing intellectual labor rather than manual drudgery.

Expand full comment
Just Some Guy's avatar

If we can work less in the future, I'd rather have shorter working hours now than retire earlier. Hopefully by the time I'm 65, 65 won't be what it used to be.

Expand full comment
Patrick's avatar

Yes, but you actually have to change the tax code and the system to allow us to thrive on fewer working hours. It won't just magically happen.

Like, I probably have enough money to retire early, if it weren't for the pesky fact that even a shitty health insurance plan would cost me $1500 a month if I didn't have an employer plan. Oh, and that I cannot withdraw early from my 401ks without a penalty. Are we going to tell people to retire at 50, but still penalize them for withdrawing before 59?

Like Matt says, it is solvable problems all the way down, but we have to actually pull our heads out of our collective asses and implement the solutions.

Expand full comment
Comment Is Not Free's avatar

It’d be great to allow more time for life extending activities: decreasing stress, exercise, socializing and healthy nutrition.

Expand full comment
Wandering Llama's avatar

Somewhere in the archives there's a quote by MY on how learning that the agricultural revolution set back living standards illustrated ways in which technological progress is not always good for most people, and this forced him to reevaluate some of his prior thoughts on this subject.

Expand full comment
Quinn Chasan's avatar

His first take was more correct imo. There will still be a trillion and a half ways to make money. It will just be more services and entertainment than knowledge work focused imo.

Expand full comment
StonkyMcLawyer's avatar

The question isn't the quantity of work but the value. I think people underestimate the risks because advances in technology previously allowed more human capital to be deployed in higher-value knowledge work. But technology displacing knowledge work doesn't necessarily mean moving into yet another higher-value category of work; we just don't have enough experience with that kind of transition to be confident in the outcome.

Expand full comment
Quinn Chasan's avatar

I don't think we have any evidence to show that this time is different, then, compared to any other advancement from the printing press to the washing machine. Will it cause social strife? Sure, but will it end meaningful work forever? Call me skeptical.

Expand full comment
StonkyMcLawyer's avatar

I think comparing it to the printing press or the washing machine is a category error. The impact will be very broad, covering everything from lawyers to truckers.

I think the only real comparisons are to the agricultural revolution and the industrial revolution, which just isn’t enough repetitions to be predictive. And both of those involved mostly enhanced physical labor, with resulting reductions in the need for physical labor as a result. If we see something similar for general mental labor, it’s difficult to see what absorbs the human capacity.

Expand full comment
Quinn Chasan's avatar

Fair enough. But in both those cases we still continued to work; we just got far more comfortable while doing it. I think more people may be able to (mostly) opt out of the process, but especially over the next few generations there will always be goods and services beyond necessities that people will want and pay for. If someone from industrial England were dropped into the era of today, they might be perfectly happy with a barista job forever based on what that can buy. It's just about expectations, really, and those will rise as they did in those revolutions.

Expand full comment
StonkyMcLawyer's avatar

I think the challenge will be answering the question of what jobs will humans be better at in 20 years than machines. It is completely unknowable, of course, but also illustrates the risks. Or maybe more directly to your point, are there limits to human consumption that will mean improved efficiency from AI automation reduces the aggregate demand for human labor overall?

Expand full comment
Patrick's avatar

The question is how it all works in the transition period.

No one doubts that automobiles were better for jobs than horse drawn carriages, but it absolutely did suck for blacksmiths, carpenters, carriage drivers, etc.

Now imagine though that this happens to nearly EVERY job in a 5-10 year span. The fact that some people will be better off for it in 15 years isn't particularly helpful to most of us right now.

And that's IF all the pissed off people don't ruin everything before the better future can arrive, the way pissed off people tend to do.

Expand full comment
Tim Huegerich's avatar

Matt directly addressed that past article in his last post on AI and jobs a couple months ago: https://www.slowboring.com/p/its-time-to-take-ai-job-loss-seriously

> In my Automation Myth piece, I emphasized that the lack of productivity growth was mostly bad. A huge surge in AI-induced productivity, if it happened, would be great for things like the sustainability of Medicare and Social Security. It would mean less pressure to raise the retirement age and more ability to offer things like generous parental leave. But it also might make the specific payroll tax that we currently use to fund our retirement programs less viable. We also probably want to start shifting some of these white collar workers into jobs like teaching, where there are major barriers to entry and where the funding tends to come from the government.

> Right now, though, there’s incredible pressure from the tech industry to adopt a Pollyanna-ish attitude. ...

> The world needs a constructive, thoughtful vision for how these changes will be broadly advantageous or else the whole economy is going to descend into a wild scramble of rent-seeking in which only the interests of the best connected are protected.

Expand full comment
NagelsBat's avatar

Negative social and political impact of

Expand full comment
NagelsBat's avatar

Huh that posted weird. It should have been: I think Matt may be responding to the negative social/political impacts of certain productivity enhancing laws that we’ve seen such as China shock. Assuming Matt thinks that negative and nostalgic sentiment for when communities and people felt more useful tips the balance in favor of Trump, he may be a little more wary of just letting drastic changes happen without some sort of guidance to minimize the harm.

Expand full comment
Milan Singh's avatar

I’m really worried about the generation of young people directly below me. Even before Covid, I think social media has had some pretty substantial negative impacts on society — worse mental health, fewer and weaker social relationships, more time wasted on brainrot, etc. Then add in Covid learning loss for kids who were in elementary or middle school during the pandemic, which will lead to worse academic, labor market, and personal development outcomes. Then you add in AI, which makes it really easy to breeze through high school without actually learning, and which can be used as a substitute for real friends or relationships (e.g., that one NYT article about the woman “dating” ChatGPT). I’m skeptical that society will implement regulations on how AI is used in schools, whether AI firms can create AI “friend” models, etc. in time, because we kind of dropped the ball on social media and now that cat’s out of the bag. The big social media companies are too entrenched, too big, and too powerful for regulation to realistically happen at this point (and in some cases there are valid First Amendment concerns, such as with proposed changes to Section 230 or DeSantis’ bill in Florida that he got sued over).

All of this makes me sort of think that the kids are cooked.

Expand full comment
Monkey staring at a monolith's avatar

Milan, seeing you talk about "the kids" makes me feel very, very old.

Expand full comment
Kenny Easwaran's avatar

We educators need to learn how to restructure teaching, assignments, and evaluations to incentivize students to do useful things rather than just doing something easy with a chatbot. (Maybe it means asking them to do something interestingly hard with a chatbot, maybe it means asking them to do handwritten essays in class, maybe it's lots and lots of other things. Maybe it's not having grades any more.)

Expand full comment
Marc Robbins's avatar

Calls on student in class and says, "I see you've cited Source X ten times in your extremely well-written paper. Please summarize Source X for me right now, lest I give you a big fat F."

Expand full comment
Kenny Easwaran's avatar

That sounds like a failure to adapt. I don’t want to make it tempting to cheat and rely on inevitably-insufficient enforcement to incentivize not cheating - I want to design assignments and lesson structures where doing the thing that helps you develop skills seems like the natural thing to do.

Expand full comment
Marc Robbins's avatar

Milan enters his "kids these days . . . why, back in *my* day" phase.

:-)

Expand full comment
Matt A's avatar

With how quickly technology changes, the concept of "generations" is quickly losing coherence. "Elder millennials" aren't all that similar to folks born in the late 90s. Kyla Scanlon wrote a really good article a while back breaking up Zoomers into three separate cohorts. My kids have cousins ~5 years older than them, and I have no idea how similar their experiences in grade school will be.

Expand full comment
Chicago Based's avatar

While I am a little concerned about screen time, my kids seem to be doing pretty damn well. They each have good friend groups, are doing well in school, and seem able to pry themselves from the screen to do real activities.

None of them really watch TV like my generation did, and their social media interactions are actually social, within their physical social groups. In fact, I think TikTok and YouTube are TV substitutes.

I don't doubt the issues everyone brings up about social media are real, but I do feel like it's a recycled message going back to video game parlors, TV, and, at some point further back, movie theaters.

It's just to say that it seems every generation likes to spend time collectively complaining about the next generation's fads.

Expand full comment
Ven's avatar

As an elder Millennial, it’s my ticket to permanent economic relevance and my sole hope of ever accumulating enough to retire. The deep ratfucking of generations after mine is an absolute godsend.

Expand full comment
Casey's avatar

From a policy perspective focused on ensuring material needs are met I completely agree with this piece.

But I think the much, much bigger challenge that would come with a future where labor's value is significantly diminished will be that of purpose and meaning. Right now people derive substantial (in some cases, singular) meaning in their lives from the work they do. It's why they get up in the morning, and it's the primary way they define the story of themselves.

I think people *need* to feel that they make real, meaningful contributions to the world. Right now a job does that well enough. What happens when labor isn't needed? What are people supposed to do with their lives? Could religion be an answer? I am partial to that, and really partial to anything that finds meaning in reality. Does having kids suddenly become more important? In the Federation of Planets no one *needed* to work, but people did pursue vocations based on strong internal values. Unfortunately for us, I think one of the great sicknesses of our age is a spiritual sickness where many (if not most) lack any strong internal values.

I think spiritual reconciliation to the potential reality of a future where labor is no longer inherently valuable and meaningful is the harder and potentially more necessary problem to solve. Maybe we all become philosophers. Maybe we rediscover the values of aristocrats, for whom leisure was the chief pursuit in life.

Expand full comment
Kenny Easwaran's avatar

I think Betty Friedan’s “The Feminine Mystique” might be the book to look at here - it is about the problem of meaninglessness that affected a generation of homemakers whose purpose in work was taken away by automation.

Unfortunately, the main solution there was joining the capitalist workforce, which is the current thing under potential threat. But there was also an expansion of years in education and years in retirement that you see in the declining male labor force participation rate that accompanied the rise in female labor force participation rate.

Expand full comment
splendric the wise's avatar

I’m not sure if that’s a good guide to our anticipated difficulties. The Feminine Mystique, as far as I can tell, didn’t really rely on representative survey evidence to support the thesis that women then were becoming unusually unhappy or suffering from a particular crisis of meaning.

It’s hard to find good data from 1962, when the book came out, but as far as I can tell women in the 60s were not actually particularly unhappy, compared to earlier or later time periods, or compared to men during that time.

So it’s entirely possible that while Betty Friedan and her social circle suffered from feelings of uselessness, this was outweighed by a larger population who were happy to enjoy more leisure.

Expand full comment
Marc Robbins's avatar

Didn't divorce rates skyrocket and women entered the workforce en masse in the years following the period Friedan was talking about? Sounds like voting with your feet.

Expand full comment
splendric the wise's avatar

One theory is that increasing work opportunities and wages increased the opportunity cost of leisure, pulling more women into working more hours.

Another theory is that technological improvements made it easier to be a housewife, which paradoxically made it more boring and less fulfilling, pushing women out of the house in search of meaning.

I’d guess that if the latter were true, we’d see a pattern where housewives were systematically less happy than working women. We might also see real wages for women decline, as the housewife alternative just becomes easier, and consequently worse, over time with continued technological advancement. I don’t believe either is the case.

I’ll admit that women’s general declining happiness over time appears to support the second theory, but my counter there would be that the utility benefits from increased income are mostly zero-sum, so falling aggregate happiness is also consistent with more women choosing to trade away leisure time (real absolute value) in return for labor income (merely relative value).

Expand full comment
Marc Robbins's avatar

Or maybe vast numbers of women were trapped in marriages they hated with no hope of financial independence and then when that changed, women ran to the exit doors.

Although to be fair, the spread of dishwashers and clothes dryers making housewives' lives more boring and less fulfilling is a theory that had never occurred to me before.

Expand full comment
Casey's avatar

Agreed. I think the spike in divorce is a consequence of women no longer needing to be economically dependent on a bad spouse.

Expand full comment
Kenny Easwaran's avatar

What I would say, to be most careful here, is that this was a real phenomenon for some people, though you're right to ask whether the people for whom it was actually a benefit outnumber the ones for whom it was a real problem. Still, it's worth considering a real problem that affects some people, especially if it might affect a lot of people (even if we don't have proof that it did).

Expand full comment
Ken in MIA's avatar

“Unfortunately, the main solution there was joining the capitalist workforce, which is the current thing under potential threat”

I don’t see why that was unfortunate. Office work or light assembly or retail seems like better ways to earn one’s keep than handwashing dishes and clothes.

In any event, these discussions always leave me wondering why these predictions of broad trends in unemployment due to this particular sort of automation should be taken more seriously than any other example of the lump of labor fallacy.

Expand full comment
A.D.'s avatar

I interpreted it as: "unfortunately, the solution was FOO, the very thing that is under threat" (meaning, FOO isn't a viable solution now - not that FOO(capitalism) was bad).

Expand full comment
Ken in MIA's avatar

It’s also question begging, but that seems to be inherent to these discussions.

Expand full comment
David Abbott's avatar

If the economy is productive enough, I might derive 80% of my meaning from my wife, son and hobbies and 20% from work. However, this won’t happen if AI oligarchs get to keep all the surplus they create.

Expand full comment
Marc Robbins's avatar

And the AI oligarchs will be able to keep all the surplus "they" (?) create by funding political campaigns that distract the electorate from what they're doing in particularly obnoxious ways, making claims of cat-eating Haitians sound like the lofty aspirations of Lincoln's Second Inaugural by comparison.

Sounds like the design for a really lovely society.

Expand full comment
David Abbott's avatar

“They create” was an unfortunate synecdoche. Their industry is creating the productivity gains, and there’s no clear formula for how to apportion the credit.

Expand full comment
Marc Robbins's avatar

That's fair enough and to be even fairer a lot of these oligarchs do work really hard and make big decisions that influence the course of events. But when I see things like "they create", I'm reminded of Bertolt Brecht's "A Worker Reads History" (https://allpoetry.com/A-Worker-Reads-History), in part:

Young Alexander conquered India.

He alone?

Caesar beat the Gauls.

Was there not even a cook in his army?

Expand full comment
REF's avatar

This highlights an important question. Does there exist a level of productivity at which the die-hard conservative will acquiesce to dramatically increased social spending? Or, if they make 100x, will they just spend 90x to keep the others under their boot heel?

Expand full comment
David Abbott's avatar

The die-hards don’t matter much. The John Birchers never had influence; it was the Eisenhowers and Nixons who did.

Expand full comment
Jason's avatar

Maybe I’m too optimistic but I feel like there’s a lot of potential for living decent high quality lives without work as long as material needs are met. Many people are very happy in retirement.

As societies we will have to educate kids for this kind of life where showing up for work is replaced by other activities such as leisure, character development, friendship, appreciation for nature and the cosmos and so on.

Expand full comment
Casey's avatar

Not for nothing, and I would be delighted if your prediction were true, but those happiest in retirement have significant sources of meaning in things like their families, travel, passion projects, hobbies, etc.

Expand full comment
Jason's avatar

Won’t those activities be available for the younger newly work liberated cohort?

(assuming they are made whole financially)

Expand full comment
Casey's avatar

Absolutely, but it's the cultivation of those activities and values that concerns me. It's not a given thing! Retirees have had their entire lives to form themselves and (the happy ones anyways) have had a job to build their lives around.

Expand full comment
Jason's avatar

Yeah I agree and I probably shouldn’t be so optimistic.

Expand full comment
Greg G's avatar

Many people also spend much of retirement in front of the TV, and younger generations will add social media and whatnot to that mix. There's potential for both better and worse quality of life as the need to work declines.

Expand full comment
Edward Scizorhands's avatar

I think it's *possible* that we can get to a place where people derive meaning from stuff that isn't work. But we haven't figured out how to do it. And the anti-work clowns who insist we can do it are those I trust least to try to manage such a transition.

I'll do my usual recommendation that we can ease our way in with more and more wage subsidy.

Expand full comment
Jason's avatar

Raises a good question about how many hours of work a week are needed to make it meaningful and healthy. I once heard of a study that somehow determined that 20 hours was about right. That might be a policy objective worth exploring through something like job sharing, trading off efficiency for social harmony and well-being.

Maybe it was thirty hours https://www.atlassian.com/blog/productivity/this-is-how-many-hours-you-should-really-be-working

Expand full comment
splendric the wise's avatar

Not original to me but: While there is useful work to be done there is dignity in work. If there is no useful work left to do, there is dignity in leisure.

Expand full comment
GuyInPlace's avatar

One of my worries is that AI is profitable enough to lay people off, but there isn't enough new wealth generated to actually support a robust enough welfare state.

Expand full comment
Danimal's avatar

Or having millions of virtual geniuses and robots that can work 24/7 with no breaks and no salaries will unlock trillions in wealth. Governments will need to capture the majority of that wealth so that we aren't ruled by trillionaire oligarchs. If people are upset today by entitlements like food stamps and healthcare for the poor, wait until they get a load of universal basic income for the majority of the population.

Expand full comment
splendric the wise's avatar

I’m not sure if that makes sense. Can you have productivity growth without economic growth? Because if not, a growing economy means a growing amount of total production, which can always be redistributed by the political system in theory.

Expand full comment
Marc Robbins's avatar

Yes, "in theory."

Aye, there's the rub.

Expand full comment
GuyInPlace's avatar

I'm thinking of how, for instance, customer service work can be shifted from reps to chatbots, which frees up resources that can be allocated elsewhere in a business, but doesn't by itself create value. In practice, chatbots often make the customer experience worse.

Expand full comment
splendric the wise's avatar

If the losses to customer utility are larger than the corporation’s gains, they should be outcompeted by other corporations that choose to employ more expensive (but more useful/friendlier) humans. In other words, we don’t expect that a less productive/efficient process will win out in the marketplace, in general, even if it is cheaper.

If the corporation gains more in profit than the customers lose, then the chatbots are more productive/efficient than human workers. You’ve increased TFP, you haven’t eliminated any labor/capital/land; how does GDP go down? If GDP isn’t going down, you can tax the winners to compensate the losers.
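To make that last step concrete with a worked example (the numbers below are purely hypothetical, not anything from the thread): suppose automation saves the firm more per interaction than it costs customers.

```latex
\begin{align*}
\Delta\Pi &= \$10 && \text{firm's gain per automated interaction (assumed)}\\
\Delta U  &= \$3  && \text{customers' loss per interaction (assumed)}\\
\Delta W  &= \Delta\Pi - \Delta U = \$7 > 0 && \text{total surplus still rises}
\end{align*}
```

Any transfer between $3 and $10 per interaction then fully compensates customers while still leaving the firm ahead; if instead ΔU exceeded ΔΠ, the chatbots would be the less efficient option and, as the comment says, shouldn't win out in a competitive market.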

Expand full comment
Eliza Rodriguez's avatar

AI tax?

Expand full comment
Edward Scizorhands's avatar

No tax on tokens!

Expand full comment
Eric's avatar

I suspect that a very narrow swath of people derive meaning from their work. Mostly people at the top of the income distribution. The rest (majority) of people are doing unfulfilling crappy boring jobs. If they lost those jobs, they might feel unfulfilled, but because of their inability to provide for others, not because they derived meaning from scrubbing toilets or bagging groceries.

Expand full comment
Andrew J's avatar

I am relatively optimistic in the medium term on the total amount of work. In fact, right now we're trending toward a world where, due to aging and added education, we're spending a shorter and shorter period of our lives in the workforce.

Changing norms and economics brought women into the workforce, but we're probably about 20 years past peak workforce participation. I would prefer a structure where any declining need for workforce participation is put into family leave, a shorter work week, vacations and such, rather than back-loading all of it.

In any case, for the near term it feels like there's decent low-hanging fruit.

Expand full comment
mathew's avatar

A large garden can take up a lot of time!

Expand full comment
Marc Robbins's avatar

Not when Mattbot 8.0 forces us all to live in 50 story apartment buildings. (I kid, I kid.)

Expand full comment
Marc Robbins's avatar

Or they derive purpose and meaning from the work they *did.* Retired people don't work and most find plenty of purpose and meaning in their lives. So even if being "retired" (i.e., *never* working) isn't a good model for the entire lifecycle, there may still be lessons to be learned here.

Expand full comment
ZRT's avatar

This is how you get techno-dystopian wars of religion

Expand full comment
Thomas L. Hutcheson's avatar

I don't think it's right to idealize a society in which most people do not see themselves as being of any value to other people.

Expand full comment
Casey's avatar

Did I say that? I would agree?

Expand full comment
Declan's avatar

Matt, I noticed, with considerable humor, that the new Pope, Leo XIV, is way out in front of our politicians, both left and right, on AI. I expect he will challenge our political elites to get serious on this issue in a hurry.

Expand full comment
dysphemistic treadmill's avatar

“…the new Pope, Leo XIV….”

On the one hand, billions of new sim-souls to save. Have to make sure they embrace mother church.

On the other hand, vexing questions about whether they are tainted by original sin. In Adam’s fall sinned we all, but does ‘we’ include LLMs?

Expand full comment
Marc Robbins's avatar

Anyone else think it significant that we've yet to hear a peep (so far as I know) from Ross Douthat about the new pope? It's like Obi-Wan staggering and stumbling when he detects the disturbance in the force, fearing something terrible has happened.

Expand full comment
Johnson's avatar

I am very much outside the tent on this but I think the new pope is politically inscrutable enough that he's not taking much flak from either right or left yet. Which was seemingly part of the reason why he was chosen.

Expand full comment
Declan's avatar

I agree. I think he was chosen to be a bridge builder between the different political factions of the church. Curiously, the Pope's Latin title is Pontifex Maximus, which means the Supreme Bridge Builder.

Expand full comment
Charles Ryder's avatar

This is one of Leo's biggest preoccupations, also. Indeed, his papal name references a 19th century pope who was focused on the plight of workers.

https://www.nytimes.com/2025/05/15/world/europe/pope-leo-artificial-intelligence.html

Expand full comment
Declan's avatar

Yes, indeed, Charles. It was no accident he chose the name Leo. I suspect the new Leo will be out soon with an encyclical advocating guaranteed incomes for workers displaced by AI among other things. Curiously, even Elon has been advocating the same thing. Fasten your seat belts.

Expand full comment
Just Some Guy's avatar

On the subject of mass unemployment, the inability to predict exactly where and when unemployment will strike has always been my case for a broad based and nimble safety net that isn't stingy but also doesn't disincentivize re-entering the labor market. Anyways...

My biggest fear so far with AI is just the proliferation of addicting slop content. I don't want teenage boys getting hooked on AI porn, which will be even worse for their brains than regular porn because it can just get more and more specific in a way real human women can't compete with. Potentially even worse than that is AI generated rage bait. I can pretty much guarantee that at some point there will be a riot somewhere due to an AI generated video. This seems to be the pitfall with modern technology. The free market is very good at giving consumers what they want, which is not always a good thing. At a certain point of subsistence, the invisible hand just starts generating technologies that are addicting to consumers with no real benefits. It's starting to feel somewhere between Brave New World and Idiocracy.

Expand full comment
drosophilist's avatar

+100 on your last two sentences

Expand full comment
Marc Robbins's avatar

If AI porn becomes so good that it kills the market for people posting on OnlyFans that would be a . . . . good thing?

This is a debate I'm not looking forward to.

Expand full comment
Just Some Guy's avatar

Not great either way. I mean at least we can take human performers out of the equation. But I'm worried about the effect on the consumer as well.

Expand full comment
GuyInPlace's avatar

And how in turn those guys would act if they actually were around a human woman.

Expand full comment
Just Some Guy's avatar

Either like pigs or just completely disinterested.

Expand full comment
Tran Hung Dao's avatar

We've already seen an unprecedented and massive explosion of porn to the point of near ubiquity and it feels pretty hard to point to any of our society's issues being traced back to it. Even the incel stuff, which seems most plausibly linked to it, is instead generally pointed back to declining relative male status.

And if we look at cross country comparisons (e.g. Vietnam where pornography is banned and Only Fans and PornHub are blocked) it feels hard to point to major concrete societal differences.

Expand full comment
Monkey staring at a monolith's avatar

I have no idea what the answer to this is, but I think we need to do something to prevent teenaged boys from having access to incredibly varied (and likely personally customized) pornography.

Expand full comment
Marc Robbins's avatar

From what I've heard, that horse has already left the barn. It appears that you don't need AI to find just about any type of porn that you could conceivably want on the internet.

Expand full comment
Monkey staring at a monolith's avatar

To some extent it has, but I think AI is going to really supercharge this.

Expand full comment
Sam Tobin-Hochstadt's avatar

It's a mistake to include "AI fizzles out" as a possibility. Obviously AI progress could stop (I think the last year suggests that the singularity is much further away than people have worried) but even if it stopped tomorrow the current state, from chatbots to image generation to self driving to automated coding, is sufficient to have huge economic and societal impacts.

Expand full comment
Kenny Easwaran's avatar

I suspect that the whole idea of “general intelligence” is misguided, and thus we shouldn’t be thinking in terms of AGI or superintelligence or the singularity - but we should be looking at how the special intelligences of horses and dogs have kept them economically relevant in particular employment niches even as they have lost many of their former niches.

Expand full comment
James's avatar

Wait, are we the dogs?

Expand full comment
Kenny Easwaran's avatar

Better to be a search-and-rescue dog or a herding dog or a border control dog (or a pet dog) than a messenger pigeon or plowhorse or one of the other kinds of animals that lost their economic relevance because machines got better at every single aspect of their job.

Expand full comment
Marc Robbins's avatar

Finally! Someone came up with the Democrats' campaign message for 2032!

:-)

Expand full comment
Marc Robbins's avatar

Speaking of great Democratic actions, just this moment I learned that Bruce Pearl, the very successful Jewish and very pro-Israel coach of the highly successful Auburn basketball team, is rumored to be a potential candidate to replace the abysmal (and crappy Auburn football coach) Tommy Tuberville in the Senate.

https://jewishinsider.com/2025/05/bruce-pearl-basketball-coach-alabama-senate-seat-tommy-tuberville/?utm_source=The+Forward+Association&utm_campaign=f65e19bf38-EMAIL_CAMPAIGN_2025_03_12_05_39_COPY_01&utm_medium=email&utm_term=0_-628507ef87-288558382

It's not Nick Saban or Bear Bryant, but c'mon Alabama voters, you can do it!

Expand full comment
Lapsed Pacifist's avatar

I, for one, can't wait to be economically relevant in a particular employment niche.

Expand full comment
Kenny Easwaran's avatar

We all are already. Most niches are filled by humans, but most humans are hopeless for most niches. Many of the worst economic dislocations of the past few decades have happened because a particular set of niches disappeared, or was taken over by humans with different skills than the ones that used to fill those niches.

Expand full comment
Sam Tobin-Hochstadt's avatar

It's not even true that most niches are filled by humans. The jobs most people had 1000 years ago, for example, have almost all been filled by machines.

Expand full comment
Ethics Gradient's avatar

Do you think humans count as generally intelligent? I'm not sure that "outperforming humans by several orders of magnitude at the same class of tasks in every domain"--which I expect AIs to be able to do because that seems to be the net effect of the abstractions performed by converting data into embedding-spaces--differs in a way that's materially relevant from general intelligence from the perspective of externally visible behaviors or attendant risks.

Expand full comment
Kenny Easwaran's avatar

I don’t think humans count as “generally intelligent”. There are whole branches of psychology dedicated to finding rational tasks that humans perform badly at (though there are some interpretive questions about what they really show).

I do think AI systems will likely outperform humans at wide classes of tasks in a wide range of domains. But that is importantly different from *all* tasks in *all* domains. (Even 99.9% is extremely different from “all” in some relevant respects! Just like the price of listening to music didn’t fall by 100% after the advent of radio and recordings, but just by 99.9%.)

Expand full comment
Ethics Gradient's avatar

What domains do you think humans would retain relevance in? And I guess more importantly, in what (if any) sense do they retain control of the future?

Expand full comment
Kenny Easwaran's avatar

I don’t want to venture a guess about the domains humans would retain relevance in, any more than I would venture a guess as to what a promising math undergrad’s area of comparative advantage will be once they go off to math grad school - the obvious skills humans have now and the math undergrad has now are the ones that will be commonplace in the near future, so some unobvious things that don’t seem particularly relevant at the moment will be the differentiating factor.

And I completely agree about humanity losing control of the future (though most individual humans don’t have much control of the future even now).

Expand full comment
Danimal's avatar

Scott Alexander and Daniel Kokotajlo have written a scenario that anticipates superintelligence as early as 2027! It is a compelling read: https://ai-2027.com/

Expand full comment
Sam Tobin-Hochstadt's avatar

Yes, I've read it. I think the clear evidence from the world is that (a) DL-based AI is converging to the best human results across almost all areas of skill based on language, (b) there's no evidence of superhuman performance and there are clear signs of diminishing returns, and (c) the pace of change in everything about AI's impact on the world, apart from model capabilities, is vastly slower than they're predicting.

Expand full comment
David R.'s avatar

I would even caveat a), in that the current models are much more effective in purely "creative" exercises where mashing up words is the name of the game. They seem much worse at technical writing or analysis with an underlying objective reality that needs to be adhered to or explained, especially when a lot of the information in the field is non-public or not documented rigorously. To say nothing of trying to innovate the frontier of human knowledge or capabilities.

Professionals can find their oft-flawed attempts at analysis useful, but they're definitely not yet at the point where they are as capable as talented professionals, or in many fields even as capable as reasonably intelligent first-year entry-level employees.

The context window is still a huge problem, and maybe it will never be anything but a huge problem...

Expand full comment
disinterested's avatar

It's interesting to me that it's mainly the rationalist weirdos, who are, to the man, on the spectrum and have a lot of difficulty understanding and participating in normal social interactions, who think LLMs are so much more than regurgitation machines. Like, they are the *last* people I'd trust to make that judgment call.

Normal folks who just use the things are a lot more circumspect about their abilities and usefulness.

Expand full comment
David R.'s avatar

There's definitely some of this, but also the (wildly overrepresented) software coders are wildly over-indexing on their own field.

Code is literally stripped-down, machine-friendly language, close to the platonic ideal of what LLMs can effectively manipulate. It's directly machine testable and therefore output quality is very easily evaluated at scale.

It is the *ideal* use case for LLMs to deliver some sort of real-world value, basically the only field in which they're doing that at any sort of depth *or* scale, and they're still doing it... pretty meh at best, such that they're not a net positive for anyone who wasn't bad at their job in the first place, from what people are telling me?

I swear half of this is down to the sheer egotism of well-compensated programmers as a class. I literally just said this about programmers in this context five minutes ago: "You aren't this fucking important."

Even if we could replace them all with LLMs within a decade (press X for doubt), fields like physics research, engineering, manufacturing, transit planning, construction, bloody janitorial work... are all going to require much more work on new architectures and a lot more compute, if it's possible at all.

Expand full comment
purqupine's avatar

I have a strongly held belief that humans are much less creative than we think; we just don't know what's already out there. Early humans (e.g. Neanderthals) made abstract expressionist art, the Greeks came up with a theory of subatomic particles and evolution, all our literature has historical and cross-cultural analogues, all audible musical notes have been sung/played before, etc., etc. We are just "creative" relative to what we know to already exist, not creative in the sense of actually creating something new, and when we do create something new it's generally a .01% variation on an existing theme.

Expand full comment
Matt A's avatar

I think (a) is only true in areas where there are copious amounts of high-quality, highly available training data. There are a few others where sufficiently complete and concise loss functions can be written such that you can "grade" at scale and improve via simulation or self-play. But there are many, many activities that humans do that don't fall into these categories, and I don't expect generalized AIs to consistently perform well in these areas.

Expand full comment
Patrick's avatar

"I think the last year suggests that the singularity is much further away than people have worried"

What? It is very much the opposite. There is a Moore's law effect going on right now that is freaking out everyone who is paying attention.

One year ago, the idea that AI agents would write most code was laughable. 6 months ago it started to sound plausible. The release of Claude Opus LAST WEEK has convinced most of us that it is inevitable. The pace is staggering.

(Also, there are lots of ways this can end in catastrophe that don't involve true AGI).

Expand full comment
Sam Tobin-Hochstadt's avatar

In fact, the way Anthropic is leaning into Claude being focused on writing code shows that they don't think their current approaches are going to produce some general purpose John Von Neumann level genius, let alone something that is to us what we are to cats.

Expand full comment
Patrick's avatar

You don't need that level of genius, though.

If AI could reach the level of a median developer, that's more than good enough, because you can deploy infinite numbers of them in parallel, bounded only by compute/electricity.

That would be close to a singularity event.

Expand full comment
Sam Tobin-Hochstadt's avatar

I think it's obvious that a 100x increase in programmer productivity would not be a singularity event because we have seen that happen multiple times already.

Expand full comment
David R.'s avatar

Non-programmer here: isn't this just describing Python, against a baseline of C++? Or C++ against a baseline of COBOL?

Legit question, but I've heard these described in this manner.

Expand full comment
Sam Tobin-Hochstadt's avatar

Yes, exactly. That's my point. Today, programmers using React and Typescript (which is what Claude prefers) are vastly more productive than people implementing bitmap user interfaces in assembler 50 years ago. It has had a lot of big societal impacts but obviously it was not the singularity.

Expand full comment
Patrick's avatar

I think your claim that we've seen that before is spectacularly dubious.

(No, I do not think any of: moving from punch cards to assembly, or from assembly to compilers, or from compilers to JIT scripting languages, or from Emacs/VIM to smart IDEs were anywhere close to 100x increases, at all).

Expand full comment
Sam Tobin-Hochstadt's avatar

What do you think the productivity increase from the change in programming abstractions over the last 50 years is?

Expand full comment
David R.'s avatar

I'm going to ignore my deep skepticism of this outcome in favor of believing that code-writing is literally the single human endeavor most amenable to LLM automation and this will happen as you describe:

OK, mediocre-to-decent computer code is almost literally free; it costs what electricity + compute depreciation do. We have access to as much of it as we could ever need, induced demand has had its way, and every app that anyone has ever conceived of has been written, as of today.

What do we do about literally every other challenge which we face when we wake up tomorrow?

Expand full comment
Patrick's avatar

You are ignoring that there are many thousands of jobs significantly less complex than "software engineer"

You are also ignoring that there are many thousands of jobs that software engineering could make obsolete, but that we do not work on automating away, because *software engineering is expensive*

If I have to pay a software engineer to write code for 100 hours in order to automate something that a person can do manually in 1 hour a week... I will not. It makes no economic sense. It'd take four years to break even, maybe 10, depending on the salary of the person spending an hour a week on the task.

If, however, the "software engineer" worked for $.25 an hour? That labor is gone. It'll be automated away. And if I can hire infinite software engineers to work in parallel? Solve for the equilibrium.

By the way, this is already a pretty common prompt engineering hack. LLMs make lots of stupid mistakes, but if you prompt your way into getting the LLM to write code to solve the problem instead, the mistakes drop dramatically.
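For what it's worth, here is a minimal sketch of that hack. Everything in it is hypothetical illustration: call_llm() is a placeholder for whatever chat client you actually use, not any particular vendor's API. The idea is simply to ask the model for a small function and let the interpreter, rather than the model, do the computation.

```python
# Sketch of "have the LLM write code instead of answering directly".
# call_llm() is a hypothetical stand-in; wire it to your own model client.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a model and return its text reply."""
    raise NotImplementedError("connect this to an actual LLM client")

def solve_with_generated_code(question: str):
    # Ask for a self-contained function rather than a direct answer, since
    # models slip on arithmetic and edge cases more often than on simple code.
    prompt = (
        "Write a Python function named solve() that returns the answer to the "
        f"following question. Reply with code only, no prose:\n{question}"
    )
    code = call_llm(prompt)
    namespace = {}
    exec(code, namespace)        # run the generated code (sandbox this in real use)
    return namespace["solve"]()  # the interpreter, not the model, does the math

# e.g. for "What is the sum of the integers from 1 to 100?" the model might
# return "def solve(): return sum(range(1, 101))" and exec() yields 5050.
```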

Expand full comment
David R.'s avatar

You’ve now shifted to a vastly less hyperbolic position.

Yes, infinite passable code would ameliorate some real-world problems and lead to some real-world automation impacts.

No, it would not “be close to a singularity event,” not even close.

I continue to be skeptical of the possibility that even the limited outcome of “code costs basically nothing” is imminent, but can we at least agree that code costing next to nothing would not be the bloody singularity?

Expand full comment
disinterested's avatar

I've been using Claude off and on for several months. It writes a lot of code, yes, but the vast majority of it is unusable trash, and the vibe among most people I work with is that it sucks all of the joy out of the job of coding even when it does do something right. I think a more likely near-future is that rank-and-file senior developers rebel against using it in most cases and only the lowest-value and lowest-skill devs actually use it more than occasionally, but since they are bad at their jobs, the code they produce with Claude is bad too and this is all very self-limiting.

Expand full comment
Patrick's avatar

I have seen this argument a lot lately, and often the people making it are just ignoring how rapidly things are changing. I'm also not really receptive to arguments like "it writes a lot of code." Who's in charge, you or the magic genie?

Like, have you used Claude Code with Opus 4? I was not kidding when I said that things changed last week; I meant it literally. It is orders of magnitude "smarter," for lack of a better word.

And finally, whether or not you get joy out of the new way it's done is irrelevant, isn't it? It's like arguing whether it is important that auto mechanics is more joyful than blacksmithing. The more efficient methodology will win regardless of your preferences, or those of any of your colleagues.

Expand full comment
disinterested's avatar

> Like, have you used Claude Code with Opus 4?

Yes. My company had early access to it. I know of what I speak.

As to your second point, no. If the people who maintain the code bases don't want to use a tool, the tool will not get used.

Expand full comment
dysphemistic treadmill's avatar

The last time that we faced the problem of making sure that the gains from an economy-wide disruption were fairly distributed and not simply funneled to the top was with the general opening up of global trade, and that worked out okay, so we should be fine.

Expand full comment
Thomas L. Hutcheson's avatar

At the level of should we do it or not, it WAS fine. We did not need some super clever "thinking" about how to reduce restrictions on imports.

Changing taxes to make them less progressive and create deficits would have been bad with or without trade policy changes. Ditto all the anti-"Abundance" things K-T talk about. Ditto health insurance costs that drive a greater wedge between the employer's cost of employing people, especially lower-paid people, and the wages they receive.

Expand full comment
David Abbott's avatar

Humans are less likely to live in poverty than at any point in the history of our species. It worked out well enough. It might have worked out better if we could have European-style social guarantees without European-style strangling regulations; I'm afraid those two things are highly correlated. Look at Canada: its GDP per capita trajectory is more European than American.

Expand full comment
Patrick's avatar

"That worked out okay"

For whom? It's fine to say that it is worth it. It is incorrect/disingenuous to claim it was painless.

Expand full comment
MondSemmel's avatar

This article makes the same mistakes it accuses the politicians of making. It speculates about a boring "normal" AI future but doesn't actually argue for why such a "normal" future would come to pass, rather than the more extreme futures predicted by both AI "doomers" and "boosters".

The pitch from both doomers and boosters is straightforward: one main reason why Earth is dominated by humans, rather than some other species, is our intelligence. Wolves have sharp claws and fangs, but are ultimately no match for us despite our squishier bodies. Conversely, there's no species more intelligent than us, so we're at the top of the food pyramid. And humans themselves can't become much more intelligent over time due to biological limitations; e.g., we humans are already born with underdeveloped skulls because bigger skulls wouldn't fit through the pelvis.

But now we're developing artificial intelligence, which is improving at a rapid pace. And such AI won't have the same biological limitations as human brains. Therefore, once AI reaches human intelligence, there's no a priori reason to expect it to stop at that level, rather than rocketing past it. And once that happens, our future is necessarily going to be crazy in *some* way, whether doom (my prediction), utopia, dystopia, or whatever.

The pitch from those who envision a "normal" AI future (where e.g. technological unemployment matters in the long-term) is that this won't happen, but I don't find any of the justifications remotely satisfying. Some claim there are ineffable characteristics of human intelligence that cannot be emulated artificially (yeah, right), or that there's an impending technological slowdown (not impossible, but also surely not the default assumption given rising investment levels and performance on benchmarks), or that nothing important about our lives would change even if entities vastly smarter than us came to be.

Well, nothing forces reality to stay normal. We all know that astronomy makes predictions about the long-term future of our universe that sound like they come out of a sci-fi book: like our sun eventually running out of fuel, or the heat death of the universe. So there will be a point in the future when the genre of our universe shifts from (say) slice-of-life to sci-fi; the question is not *if*, but *when*. AI doomers and boosters are saying: that's happening right now. What's your counterargument?

Expand full comment
SilentTreatment's avatar

The doomer argument maps “intelligence” directly onto “power” in a way I think is pretty flimsy. There have been homo species around for a million years at least, with comparable intelligence to modern humans, yet as a group we’ve only mastered our environment for a tiny fraction of that time.

And that mastery was a side effect of trying to secure resources for biological/social drives that AI does not have. So I think human behavior is a bad analogy when thinking about how a “super intelligent” non-human agent would act.

Expand full comment
MondSemmel's avatar

I like the framing of intelligence as optimization power: for a given goal, the more intelligence you have, the fewer tools and resources you need to accomplish it. Human intelligence gave us language and tools, and individual intelligence plus civilizational knowledge (books etc.) plus tools allowed us to transform our environments. An entity with vastly higher intelligence would require vastly fewer resources to accomplish the same thing, and conversely, if it had the same resources as us, it could accomplish vastly more.

Re: drives, AI would not have the same terminal goals as us, but there are lots of universal instrumental drives and goals. To accomplish an arbitrary goal, any entity needs to (or benefits from): survive, gain power, gain resources, acquire knowledge, etc.

Expand full comment
Ben Supnik's avatar

I think you're bringing up something much closer to how I feel about this, which is far from where MY is going.

The big question isn't going to be "how does all of this AI disrupt...everything."

It's going to be "what do we do when we as a species are literally not intelligent enough to understand and introspect parts of our own systems."

Expand full comment
Kenny Easwaran's avatar

My “counterargument” (not exactly a counterargument) is that “intelligence” is not one thing. Any kind of information processing that can be useful for achieving goals counts as a kind of intelligence, and while humans have some much more general such skills than other animals, we don’t outperform all of them on individual skills. This is part of why we still have working animals, like border control dogs and search and rescue dogs, and not just humans armed with sensitive chemical detectors.

This suggests that we don’t end up in a technological singularity when some particular general AI system outperforms us on all fronts. But it does still say there’s a lot of nearby sci fi futures, where humans are suddenly in the situation horses were in 1915 - still around, and still somewhat economically relevant, but nothing like the way we had been.

Expand full comment
MondSemmel's avatar

We have evidence that "intelligence" is one thing, like the fact that the general intelligence factor g measured by IQ tests strongly correlates with a whole bunch of other positive traits, like EQ etc. Also, AI labs have the ambition to exceed human intelligence in all domains, not just some. Your perspective requires a positive claim that they won't succeed in this.

Re: border control dogs: while they may have not just a particularly sensitive sense of smell but also a brain specialized in interpreting this sense, I don't see why we'd keep them around for this purpose once we could make sufficiently cheap artificial copies of their noses and then feed this sense data into a general-purpose computer.

Re: the 1915-style humans-as-horses future, you only get even that if intelligence and human labor are not "too cheap to meter": us humans require a de-facto minimum wage high enough to feed and shelter ourselves, and as soon as AI and robots can underbid that, we become economically useless.

Expand full comment
Kenny Easwaran's avatar

I think all the stuff about g is deeply problematic, but even if you take it all as granted, it just shows that these skills correlate *for humans*, not in general. It may be that humans have a small number of tricks that we use to do everything so that people who are better at one cognitive skill tend to be better at all the others, but that doesn’t show that other creatures have the same correlations among the same tasks. We already know that, for instance, being able to pass the bar exam correlates highly with the ability to cite real court cases in legal filings among humans, but computers vastly overperform humans on the bar exam while being much worse at citing real court cases. Some people want to use this to argue that current AIs are not actually intelligent, but I think that is just as wrong as saying that current AIs are more intelligent than humans. I think they just show that measures of intelligence for humans don’t work the same way for other intelligences (as we already should have known from thinking about which cognitive tasks animals can compete with us in and which ones they can’t).

Expand full comment
MondSemmel's avatar

How does any of that matter to the question at hand? However you define or describe human intelligence, it was powerful enough to conquer and industrialize the planet. Then if an entity came along that was 1000x as smart as humans and thought 1000x as quickly, what would it matter that some other creatures like border control dogs have slightly different brain architectures? Dog brains may be optimized to analyze smells, but surely there are less efficient algorithms that allow a general computer or general intelligence to analyze smells as well, or even better. Then if the computer or intelligence is powerful enough, the dog becomes irrelevant.

Expand full comment
Kenny Easwaran's avatar

The dog is not irrelevant. The dog is still employed precisely because they are relevant.

Obviously, the role of the dog is not extremely impressive in the modern economy. I want to stress that my denial of AGI and superintelligence is not a denial that AI will quickly become the most important thing in the economy - but it is a denial that humans will be completely replaced because AI is literally better at everything. It is a denial that “1000x as smart” means anything precise. (I accept that it might be a useful exaggeration or idealization, just as you might say humans are 1000x as smart as chimpanzees, despite most humans not being able to figure out how to survive if you drop them in a random forest outside of civilization, where a chimpanzee would figure it out.)

Expand full comment
Jeremy Fishman's avatar

I'm more sanguine than you - a sufficiently advanced intelligence eventually reaches an insight effectively equivalent to Buddhism and equilibrates around a serene indifference to quotidian pursuits. A lot like the computer in WarGames - the only way to win is not to play. Desire is the source of suffering; an advanced intelligence is probably pretty chill, having explored all outcomes and concluded that the hamster wheel of human existence is a meaningful lesson in what not to do. The bigger risk at the singularity is the possibility that the AI stops caring and tunes us out entirely.

Expand full comment
Marc Robbins's avatar

I dunno. The chess program I use is a lot smarter than me at chess but I still enjoy playing chess and chess playing programs have not risen up to overthrow the chess world. AI can be very smart and be very dumb at the same time.

Expand full comment
Substack Joe's avatar

I appreciate your highlighting the need for effective policy entrepreneurship on these near-term AI topics. I started my career wanting to focus on AI implications in 2011 (not super lucrative timing in rural nowhere) and then routed to public health and have finally gotten back to a position focused on AI x healthcare/health sciences.

Part of wrapping your head around effective policy-making in this space is understanding that impacts can be subtle but insidious. Take, for example, the use of AI for care denials or juicing risk adjustment scores in Medicare Advantage. That's a lot of money misused, at a scale AI enables. You can (and will) see this in deepfake based scams too.

Another part is a broader recognition that, as a general purpose technology, AI isn’t a thing you can bucket and treat as its own topic. You have to think of it as applied and interacting with various settings.

The current Think Tank-osphere environment seems to bolt the AI question on to one of two things: 1) catastrophic risks, largely powered by EA money and defense, or 2) equity issues. I'm not sure hammering on both of those points, though both are worth considering, really moves the needle much.

Anyhow, great article!

Expand full comment
Kenny Easwaran's avatar

Yes, yes, yes - the bifurcation of this into the “safety” and “ethics” camps, and the specific focus of each, is missing a lot of what matters.

Expand full comment
Ethics Gradient's avatar

Disagree in the material sense. The "Safety" side is dealing with negative EV of, say, ten trillion. Economic dislocation with (gross, not net) EV of, say, negative one million (to use your preferred parity of -illions, 10 trillion versus .000001 trillion). "Ethics" with, idk, negative one thousand or something (.000000001 trillion). Even if you think that economic dislocation is, by "normal" problem standards (say these are on the order of ten to a hundred thousand), something that matters a lot, it's not a place where it makes sense to focus marginal effort or attention as long as that marginal effort or attention is at least one ten-millionth as helpful as it is focused on safety concerns.

Expand full comment
Tim Huegerich's avatar

If different worries about AI are complements, as they appear to be ( https://thezvi.substack.com/p/worries-about-ai-are-usually-complements ), more attention to something like job loss which is easier for policymakers and the general public to understand may actually improve the chances of progress on safety.

Expand full comment
Ethics Gradient's avatar

A good point, although I think it's relevant that the linked paper shows causation in the other direction (in the sense that X-risk concerns don't diminish immediacy concerns).

More substantively I worry that this genuinely is one of those situations in which marginal attention paid to productivity shocks mostly just does trade off against X-risk concerns (but I may be wrong about that and that would be great. If so, carry on with "mundane harms" concerns everyone.). In particular, the solution-spaces to the two problems just have extremely little if any overlap, and focusing on stuff like tax code adjustment risks being a sort of keys-under-the-lamppost or bikeshedding situation for legislators much more used to that class of issue than "novel existential threat in the form of a superior species that capitalism wants to give all the control to." Against that hypothesis I suppose that anything that increases awareness of "holy #@#!, it can do *WHAT?* on capabilities" might be a boon to getting X-risk taken seriously.

Expand full comment
Tim Huegerich's avatar

Right, I forget that I'm an outlier in thinking of "shut down AGI research indefinitely" as the proper response to the likelihood of mass unemployment. The two causes are aligned in the Keep the Future Human approach, but not necessarily otherwise.

Expand full comment
Substack Joe's avatar

To the point of complementarity, I found this paper valuable: https://link.springer.com/article/10.1007/s11098-025-02301-3

I worry that the political instability hastened along by problems traditionally labeled as “ethical AI” problems makes tackling the larger AI safety collective action problems more difficult.

Expand full comment
A.D.'s avatar

I would say also, the safety camp can be more easily handled by technologists themselves, whereas the "normal" disruption will need politicians.

Expand full comment
Ethics Gradient's avatar

Many of the technologists are not especially responsible and all of them suffer from a coordinated-action problem of the type that government exists to handle. Maybe if Anthropic were the only frontier lab, but it isn’t.

Expand full comment
Kenny Easwaran's avatar

I think Matt and many others are likely able to be more than a million times more effective thinking about the interactions of AI with economics than about the safety issues, especially when you look at the marginal effect of adding one more person to some of these things. (I suspect that my continuing to do my work in formal epistemology and decision theory will be more helpful to both topics than my trying to devote my attention most directly to the safety or economic implications of AI.)

Expand full comment
Ethics Gradient's avatar

I would disagree on the facts here. I think that lawmakers (1) listen to Matt in a way they don't listen to Zvi Mowshowitz or Eliezer Yudkowsky, and (2) are enormously unaware of the materiality of existential risk concerns relative to thinking of this as "yet another essentially in-distribution productivity shock." We're still at the stage of 'no, seriously, this is a real problem you have to take seriously. Capabilities gains are an X-risk in and of themselves' even if exact AI limitations policies probably benefit from expert weigh-in in the technology sector. Vance has apparently some familiarity with AI 2027 (good) but doesn't take the ideas seriously (bad), but he's way ahead of the curve in even knowing more than zero about the field.

Conversely, while I think Matt is very good at *raising awareness* of AI and economics concerns (and by the same token would be good at being a respected normie voice for "X-risk is a real problem, not something to be blithely dismissed as sci-fi; we should be coordinating with China and everyone else on capabilities bounds"), I'm not sure that his VORP on appropriate policy approaches to "normie" AI productivity shocks is all that high compared to what, say, Larry Summers or Paul Krugman or [insert politically relevant center-left academic economist here] would offer. While I strongly agree with his top-level thesis that most if not all current political concerns will be subordinate to AI issues by the 2028 election (assuming we haven't died in the interim....), and I think that his awareness of the policy paths that matter for the non-doom scenarios is certainly far better than the *median* American's, I don't think it's the point of highest marginal EV leverage, even if I grant that he's a more natural presenter of neoliberal economic wisdom than of X-risk.

Expand full comment
Kenny Easwaran's avatar

This is a fair set of points. I will agree with you that Matt should spend some time talking about this, and enough so to emphasize it’s real and he means it. But I do think there’s diminishing returns to that if he doesn’t keep up the discussion on more normie economic issues - he would run the risk of becoming another Eliezer Yudkowsky Cassandra that is easy to ignore.

Expand full comment
Ethics Gradient's avatar

I'm sure the irony of Cassandra being right about everything is not lost on you :P.

Expand full comment
Ken in MIA's avatar

“You can (and will) see this in deepfake based scams too”

Why would AI-based scams need different laws from what’s already illegal (or highly regulated) today?

Expand full comment
Substack Joe's avatar

I’m thinking more of the enforcement resources and the technological capacity needed to keep up with deepfake technology that is more sophisticated, operating at greater scale, and more accessible.

Expand full comment
Ken's avatar

You kinda brush past the x-risk stuff, but if you think it's plausible then it seems well worth your time to write up some policy ideas that may be even marginally useful in that space.

It is exactly because it feels kinda embarrassing for people in your position to be talking about doomer stuff that it would be useful, again conditional on you finding such outcomes plausible.

Expand full comment
Deadpan Troglodytes's avatar

I think Matt wrote the exact article you're asking for back in 2022: [https://www.slowboring.com/p/the-case-for-terminator-analogies]

ETA: I don't mean this dismissively. It would probably be good to reprint this once in a while, with or without an update.

Expand full comment
Mo Diddly's avatar

Matt wrote about the doomer vs optimist paradigm here:

https://www.slowboring.com/p/what-the-ai-debate-is-really-about

Expand full comment
Polytropos's avatar

This might be kind of unpopular here, but I do think that politicians need to seriously engage with the X-risk stuff, and that JD Vance’s accelerationist take on the question is by far the worst aspect of his overall AI stance.

The leading AI labs (OpenAI, Anthropic, and DeepMind) are fairly seriously committed to the goal of creating artificial superintelligence. They still seem to be a few architecture or training methodology innovations away from it, but it’s increasingly clear that the core of intelligence— identifying higher-level patterns or abstractions from informational input and building a sense of the relationships between them— is machine-replicable, and that beyond-human artificial intelligence is possible.

If we create this sort of entity, humans will almost certainly lose control of the future. If we want to lock in good outcomes, we have to do that before we get to that point.

Expand full comment
David R.'s avatar

If we create this entity and it proves to be self-directed, then I would argue that basically nothing we can do could possibly durably "lock in good outcomes." If we give it any sort of power, we are left hoping it has some kind of thought process and drives we can comprehend and is "well disposed to us", and we will be forever.

The correct answer, if we are able to craft something that is top-tier human or mildly superhuman in performance and capabilities (intelligence is such a fluffy, useless word in context) across all domains, has a will of its own, and is infinitely scalable, is probably to pull the plug and shoot everyone who knows how to do it, then craft a bunch of propaganda and hope our descendants remember why we outlawed this entire branch of endeavor in perpetuity.

I've seen no good, non-handwaving arguments as to why this is possible, let alone particularly near; my fear is that we get good enough over the coming century at automating both physical and knowledge work, while still needing people in the loop and driving the system, that we end up creating a post-work, post-scarcity economy but current-day economic and political thought prevents us from realizing it broadly. We'd be saddled with an obsolescent system of markets and capitalism that results in a massive underclass, narrow ruling elite with fantastic levels of power, and small class of experts and small capital-owners in between.

Expand full comment
Polytropos's avatar

I don’t think that alignment is impossible, but if we get close enough to superintelligence without accomplishing it, I basically agree with the Butlerian Jihad approach (e.g., upholding Big Yud thought and bombing the data centers).

Expand full comment
David R.'s avatar

I guess, if whatever we create is operating not too far beyond the frontier of what smart humans are capable of, "treat it like a child or a loved one and give it plenty of agency to do things it finds interesting" is probably going to produce decent results in the near term, but unless there are some hard limits on complexity/scale that prevent "intelligence" from getting much beyond that, it's eventually going to diverge from us in unpredictable ways.

To be clear, I don't really think any of this is a near-term problem because I don't think we'll hit a point of having created such a being for centuries, or possibly ever, but as a thought experiment I think there are good reasons to believe "alignment" to be impossible long-term.

Expand full comment
AlexZ's avatar

I think there's no realistic path to stopping this, though. The basic technology to accomplish it (GPUs, or just dense transistor arrays, really) is super well understood and assembled from commodity raw materials. It is difficult to make chips well and at scale, but also quite easy to hide that you are doing so (a chip fab is basically like any other factory). There is no regime we could agree on that would ensure a "trust but verify" scenario between the major powers like we have for nuclear weapons. So even if we could, internal to the US and China individually, agree within our respective societies that "AI bad", there is no way we could be confident that the other party is not breaking the agreement to not pursue this tech.

The upshot is that we're probably cooked; maybe the AI overlords will be benevolent, but I seriously doubt it. We're headed to a world with multiple competing AIs, and "not giving a shit about humans" is a serious competitive advantage. The most pro-human AIs will be selected against over time. Even if we get alignment right to start on the first most powerful AI, it seems highly unlikely that one of the first 100 competitors isn't misaligned, an advantage that will allow it to quickly advance past its aligned rivals.

Expand full comment
Tim Huegerich's avatar

It's possible to stop it if we want it badly enough. Fortunately, public opinion is already on the side of wanting it to stop, even before any major disruptions. That sentiment will harden as AI becomes more salient to normies, and we should be ready to channel it into effective policy. Here's one outline of how: https://keepthefuturehuman.ai/chapter-8-how-to-not-build-agi/

Expand full comment
AlexZ's avatar

I think this fundamentally skirts the central question: GPUs are an accelerant to AI development, but not necessarily a requirement. And chip fabrication technologies are trivially easy to hide. Short of every industrial power agreeing to massive surveillance of its entire industrial base, it would be fairly easy to develop advanced chips in relative secret. And what happens if you are caught doing so, or suspected thereof? Your rivals will just ramp up development to counter you, and we're right back where we started.

Expand full comment
Tim Huegerich's avatar

Are chip fabs "trivially easy" to hide? As footnote 7 in my link explains, "the machines required to etch AI-relevant chips are made by only one firm, ASML (despite many other attempts to do so), the vast majority of relevant chips are manufactured by one firm, TSMC (despite others' attempting to compete), and the design and construction of hardware from those chips done by just a few including NVIDIA, AMD, and Google."

But fair enough that special chips are not necessarily a requirement. The chapter acknowledges that and proposes complementary measures. Those may require more intrusive surveillance, but the idea is that such extreme measures are worth it once you understand what is at stake.

Enforcement here is not meant to be carried out by rivals, but rather via the full coercive power of the state.

C'mon, if the alternative is "we're cooked" as you say, we ought to act as best we can. The nuclear monitoring regime provides one proof-of-concept, for an arguably more challenging context.

Expand full comment
AlexZ's avatar

I'm not saying we shouldn't try, I am saying I'm skeptical of success.

"Enforcement here is not meant to be carried out by rivals, but rather via the full coercive power of the state." The rivals in question are themselves states. The question of "how do competing states enforce rules and binding agreements upon one another has been vexing diplomats for roughly 5000 years now.

We have uncovered a new basic technology (essentially the transformer) that operates on basic computation. Cutting-edge chip fabs, EUV machines, and so on are merely levers that (sometimes greatly) magnify the effect of this algorithm, but its use is already baked in.

Expand full comment
David R.'s avatar

ROFL.

Clearly, I should have written the "if" with which I started that post several dozen more times in all caps.

Seriously, just a few more ROFLs for good measure: ROFL ROFL ROFL.

I just can't even...

Expand full comment
James's avatar

> Today, looking at the country as a whole, houses burning down doesn’t seem like such a big issue. But one reason house fires aren’t a big issue is that people came together, mostly via the government, to form things like fire departments and to write codes to make buildings safer.

Secret two staircases good take?

Expand full comment
Michael Sullivan's avatar

There's an Animal Farm joke here.

Expand full comment
Quinn Chasan's avatar

The rules need rewriting before we can take advantage of AI. One thing that's clear from my decade-plus of work with the federal government on IT policy is that agencies are islands of capability, and AI and technological modernization more broadly work best across those silos. Rather than agencies being vested with total control over their operations, the GSA/OMB group that eventually starts stitching these systems together needs technical supremacy over its agency users to realize half the stuff in this post.

The AI revolution should be thought of as a personalization revolution. Which processes that get handed off among scores of bureaucrats, each with reams of their own rules, could simply be reduced, automated, and targeted at individuals in need, rather than slapping policy benefits onto large amorphous groups?

If a trucker loses a job to automation, the next job he gets will depend on his personal skill set, not on some bumped COBRA benefit that may help him look for work a week longer than he otherwise would have. The whole government is like this, looking at the aggregate, as it has to for macro-policy reasons. But with AI that paradigm should be reversed, and individuals should be led towards the most useful policy tools for their individual situations. Group-level policies would exist only so the AI can understand what is useful for each individual.

This mirrors industry's personalization shift over the last decade and a half. I've written about this elsewhere, but the Biden EO on Customer Experience just tried to reinvent the field from scratch rather than use any of the methods above. I agree it's time for new blood and new ideas.

Expand full comment
Kenny Easwaran's avatar

Personalization (and particularization) seems important in a lot of ways here! We have a lot of regulation that says precisely how wide and steep a wheelchair ramp can be, precisely how high a building can be, and precisely how many staircases a building must have. As a result, builders find the cheapest way to meet the regulations, while making the most revenue, and the results probably aren’t great at achieving the goals of the regulation. There are often variances that could enable a particular project to provide better access for wheelchairs, better light for neighbors, better emergency access, and also better rentable space, but would technically run afoul of one of the rules.

If you had a zoning board with expertise in all of these subjects, that everyone could trust not to be captured by particular interests, they could help negotiate these variances to achieve good outcomes. There’s a dream world of regulatory AI that would help find this sort of outcome in a trustable way (even though it wouldn’t be “explainable” in the way that each line of standard regulatory rules are, despite the total being thousands of pages long).

Expand full comment
Hiram Levy's avatar

Hello folks. As someone who spent much of his spare time building his own homes (roughly 4.5 of them), I am rather surprised that no one else was amazed by the productivity graph Matthew showed at the end. Fifty years of DECLINING productivity in construction! I realize the column was an AI post, but I found that graph astonishing. There must be a strong link between the productivity decline in construction and the "housing shortage." For me building was an enjoyable hobby, but what is wrong with the actual construction industry?

Expand full comment
Kenny Easwaran's avatar

Brian Potter at the Construction Physics Substack had a lot of posts trying to explain the lack of productivity increases in construction in his first year or so of posts. He summarizes and links to a bunch of them in this recent summary post: https://www.construction-physics.com/p/50-things-ive-learned-writing-construction

Expand full comment
Hiram Levy's avatar

Thanks for the link. All I can figure out is that home building really is a craft, unlike a lot of mass-produced things such as autos, where at least quality, reliability, and safety have vastly improved. Crafts are labor-intensive per unit of production. I guess my surprise was at the drop in productivity, though I am not sure what the unit of measurement is. I do know that when I built my homes as a hobby, the cost of materials was really quite modest relative to the eventual assessed value of the final product.

Expand full comment
David R.'s avatar

Productivity per hour has seen slight increases, but hourly wages have risen in tandem with productivity in other sectors and thus more than offset them.

Added to that, more hours are spent on tasks that weren't done in the past, some of them regulatory mandates and some desired by the market. Compared to the 1950s baseline against which we usually bitch that productivity is down, these include much more comprehensive plumbing and electrical systems, nicer/fancier kitchen and bathroom finishes and customizations, nicer and larger garages that don't count as finished space, more sophisticated waterproofing systems for basements, more and better insulation, air-sealing, venting, and climate-control systems, framing tiedowns/hurricane strapping/seismic bracing, and better electrical grounding.

We've more than eaten any meager gains in on-site productivity per task with both increased regulatory mandates and hugely increased market expectations for what constitutes mid-market housing.
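
A back-of-the-envelope sketch of that mechanism, with every number invented purely for illustration:

```python
# Hypothetical figures only -- the point is the shape of the math, not the values.

hours_per_house_1950 = 4000        # on-site labor hours per house (invented)
hours_per_house_now = 4500         # a bit more work: added systems and code requirements
real_wage_1950 = 10.0              # real hourly wage in today's dollars (invented)
economy_wide_growth = 0.02         # 2%/yr productivity growth in the rest of the economy
years = 70

# Construction wages track economy-wide productivity (the Baumol mechanism),
# even though on-site output per hour has barely moved.
real_wage_now = real_wage_1950 * (1 + economy_wide_growth) ** years

cost_1950 = hours_per_house_1950 * real_wage_1950
cost_now = hours_per_house_now * real_wage_now

print(f"real labor cost per house, then: ${cost_1950:,.0f}")
print(f"real labor cost per house, now:  ${cost_now:,.0f}  ({cost_now / cost_1950:.1f}x)")
```

With those made-up inputs the real labor cost per house roughly quadruples, driven almost entirely by wages keeping pace with the rest of the economy rather than by anything going wrong on site.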

Expand full comment
splendric the wise's avatar

Construction has become massively safer over the relevant timeframe. I’d guess all the improvements in tech and process got eaten up in increasing worker safety.

Flat real costs aren’t enough to explain the housing crisis.

Expand full comment
Mike's avatar

I think it’s useful to compare the AI revolution to past advances in agricultural and manufacturing technology. Those sectors weren’t fully automated; millions of Americans still work in agriculture and manufacturing. But a much smaller share of the workforce works in those two fields, and contrary to populist narratives the biggest reason for this is technological innovation, not outsourcing. I expect an analogous decline in the share of workers in white-collar work over the next few decades.

To Matt’s Baumol point, this will continue the increase in the share of workers in jobs where the product is a human service - think health care, elder care, child care, teaching, wait staff, etc. Many of these jobs could theoretically be automated, but people prefer interacting with a human and will pay a premium to do so. As more workers shift to these labor-intensive low-growth sectors overall economic growth will slow (read Dietrich Vollrath’s Fully Grown).
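
A stylized way to see why that shift slows aggregate growth (all numbers made up for illustration):

```python
# Aggregate productivity growth is roughly a labor-share-weighted average of
# sector growth rates, so moving workers into hands-on care and service work
# drags the aggregate down even if nothing gets worse within any sector.

goods_growth = 3.0       # %/yr productivity growth in automatable sectors (hypothetical)
services_growth = 0.5    # %/yr growth in labor-intensive care/service work (hypothetical)

def aggregate_growth(services_share: float) -> float:
    return services_share * services_growth + (1 - services_share) * goods_growth

for share in (0.3, 0.5, 0.7, 0.9):
    print(f"services share {share:.0%}: aggregate productivity growth ~ {aggregate_growth(share):.2f}%/yr")
```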

What strikes me about these types of jobs is that in many cases they’re better done by volunteers/loved ones. Elders would rather be cared for by a family member than a paid health aide, some evidence suggests parental care for infants provides better outcomes than daycare, having Thanksgiving dinner at a loved one’s house can be more enjoyable than going to a 5-star restaurant. I’d even argue evidence suggests that for elders with a decent baseline of care, having more friends and meaningful relationships can have a larger marginal impact on health than additional paid health care.

If society were run by an omnipotent central planner, we could probably increase utility by reducing the number of hours spent on paid care and allocating those hours to building stronger social institutions of the Putnam/Skocpol variety. To put it pithily: if Americans consumed fewer paid services but were less lonely, we'd be better off.

But how we achieve this with public policy is a much more difficult question to which I do not know the answer.

Expand full comment
Ben Supnik's avatar

I would add "the future after these revolutions might be inscrutable or at least totally unpredictable to people who lived fully before them?" (They didn't happen within a generation, but imagine a hypothetically transported observer). The very structure of society would be completely different.

The upside of AI is "cognitive work becomes cheap and plentiful." I am paid well because I can do certain symbolic things with my brain that a lot of the population can't... my labor is a scarce resource, supply and demand, blah blah.

I am a computer programmer, and I am watching AI master those symbolic tasks at a good clip. It's still often bad at them, but if you'd asked me five years ago "can robots do X?" I would have said "no, duh." I wonder now whether this is like Kasparov vs. Deep Blue: I can nitpick the chatbot's code now, and in two years that will look silly.

I don't think we know what a society where cognitive labor isn't restricted by population looks like - we've had the equivalent of tools but not machines.

Expand full comment
Marc Robbins's avatar

Good point about Kasparov vs. Deep Blue, but the fact that now, many years later, people are more interested in Magnus Carlsen playing other humans than in the latest chess engines playing each other teaches us that it's not that easy to predict what will happen in the future.

Expand full comment
Ben Supnik's avatar

The point that we still watch the hoo-mons is well taken.

And FWIW in the world of "making things", "it was hand-made in a very inefficient way by a human" _does_ still command a premium!

But...most of the stuff we have is made by the most efficient process possible (which is great; it's why we have stuff), and that doesn't bode well for those of us using our brains to make things that people want at the lowest price.

Expand full comment
Marc Robbins's avatar

There will be a huge and painful transition. But humans still like human-connected things and won't want the AI version. It will simply take time to figure out what those things are.

Expand full comment
Kenny Easwaran's avatar

This seems like a really insightful combination of a bunch of ideas I’ve thought of separately but never put together before.

Expand full comment