During 2024 the Discourse repeatedly centered on the vibecession, despite the fact that while the American economy of 2024 had persistently above-target CPI, it also had genuinely widespread growth across sectors and wage gains across income brackets. If anything, wage gains were strongest in the lower brackets, so income inequality was declining.
The argument from those who insisted a recession was happening despite the indicators was that the indicators were bullshit and people were suffering in real life.
I can't help but think that line of reasoning is much more appropriate now than it was then, given how concentrated economic growth is in a narrow sector. Not only that, labor and wage indicators are genuinely flat or hinting at a trend in the wrong direction. It's clear to me that the AI boom is propping up the top line while the rest of the economy is slowing, as you call out here.
It's just maddening how Trump has this Defying Gravity (my five year old is on a Wicked kick) quality to him, where nothing quite bad enough to penetrate the daily experience of the median American seems to happen while he's around. Pandemics are a weird case where both it's tough to blame someone for them AND as Lindyman argues, society memory holes the experience. I don't want mass suffering, but goddammit I want something to happen that is absolutely attributable to the mad king and everyone knows it.
Biden’s mistake was that he front-loaded all the economic goodies into the beginning of his term. 2021 was actually the best economy ever and people were disappointed when 2024 was worse. Would’ve been better to stimulate more evenly throughout his term.
As someone who's unemployed and who's seen near constant layoffs and tons of people looking for work in my sector (advertising) and adjacent ones (tech, video production), I'm sure feeling the 'cession vibes and have been since 2024.
Maybe we need to move to the AI companies. I'm applying for some and I know folks doing some freelance work there. But if that is a bubble, then watch out.
We had the inverted yield curve for a long time and now it's gone back to normal. Recessions usually start sometime after the yield curve flips back up. Gold is on a tear and underlying economic fundamentals are weakening. Also, even if Matt didn't point this aspect out, some are not sure the A.I. build-out can actually turn a profit. If A.I. rolls over there's not a lot of backup in the system to keep the economy from sinking.
Re: profit. It’s an interesting question. Fundamentally I do think it’s like the dot com bubble - there will be winners and losers and the winners will make a lot of money. OpenAI generates a lot of revenue already. Amazon also generated a lot of revenue early, many years before turning profitable. Obviously not always the case but I would bet that some very profitable AI companies will exist in 10 years.
Based on the historical example of Amazon, it might be a GOOD sign if an AI company has high revenues but is "unprofitable" because it's reinvesting that revenue instead of paying dividends. There aren't too many long-term thinkers around, and they can really run (very slow) circles around more rash competitors if they're smart in their approach.
Really, though, how many cycles in the computer age have we gone through of "I'd like to see how they manage to make money with THIS"? Because every time they end up making an absolute crap ton of money. Search, video, social media... all wildly profitable. (News, well, that's a different discussion...)
The idea isn't that every company doing something will make money, it's that one will eventually do it well enough to make money. The metaverse is a joke and basically Zuckerberg's Spruce Goose. But somebody is going to be counting their twelve figure fortune off VR/AR one day, you can set your watch to it.
(I don't think 3D TVs will ever really be a thing, largely because AR is already capable of doing what it does better.)
IMHO the moat between US and Chinese AI firms is far bigger than the moats among firms within the US or within China:
You could make the argument that OpenAI is trying to be Facebook, productizing their models and building the thing that will be the next social media.
Anthropic is trying to be Microsoft, getting into enterprise software through AI coding tools and bespoke models.
Google is... trying to be Google and leveraging their scale to offer inexpensive AI across all of their products to keep them sticky.
Microsoft and Amazon seem interested in providing infrastructure.
Meta seems to be throwing money at the problem and hoping that they can grab the next trend (e.g., AR glasses) and use it to sell more ads.
Personally I think the DGX Spark is evidence that Nvidia sees Apple as a competitor in AI research---at the "I fine-tune models on my laptop for production use in a data center" level.
None of these are clear-cut---Google also does infrastructure, Microsoft is developing frontier models, etc.---and they are all competing with each other vigorously. But these are all examples of things China won't be able to compete on in the US. They are cranking out open-weights transformer models and really impressive image/video generative models that absolutely compete at the "we need cheap AI solutions for some specific business problem" level. But no US company is going to run models on Chinese data centers for US customers or expose IP to Chinese coding agents; Americans aren't going to use a China-based social network; etc.
They are betting on compute being a moat, plus network effects (the story of Google's success, more or less). May not work, especially if hyperscaling stops being effective, but that's the idea.
I think Matt is pointing out that you cannot time it.
What if there is a recession now with a quick recovery, and business is booming again by 2028? Then your “just run on the bad economy” strategy will be fun.
What if AI props everything up and the recession is just middling? What if the recession doesn’t happen until 2029? What if it starts in 2028 but people aren’t really feeling it yet?
Is it possible the 2024 economy was better for Democrats and less good for Republicans? Ex. Covid seems to have resulted in people at the bottom of the wage distribution getting raises or new jobs with higher pay, meanwhile there was inflation hitting everyone. Or maybe the causality goes the other way and people who were hurting decided to vote for Trump…
The one way the AI boom goes bust that you didn't mention: companies realize that the productivity growth they're paying for (through staff reductions/efficiency gains/something else) doesn't approach the amount of money they're spending, and start balking. Right now a lot of the AI attitudes I'm seeing in business center around "we have to do this because our competitors are doing this and we can't be left behind." But you're starting to hear more public grumblings that the emperor has no clothes.
Noah Smith had a good column recently that AI doesn't have to completely flop to burst the current bubble, just underwhelming customers might be sufficient even if long term the top players settle into being massive, profitable, but fairly ordinary tech companies.
The railroad crash in the 1800s seems like a good model to keep in mind. The railroads absolutely were useful and a significant transformation of the economy, but enthusiasm got way out ahead of the returns, and a lot of shady financing caused real pain for the economy. We will see.
And, as has been mentioned often, that phenomenon had the benefit of leaving behind a very positive infrastructure legacy, whereas the hardware build-out happening now has a lifespan of a few years (?).
I think one reason that railroads were so successful for the country is *because* people would over-invest and lose their shirts, but then a new group would show up with these relatively cheap railroad assets and then use them to do cool things. Similar with fiber rollouts in the early aughts.
Didn't this also happen with the internet? Startups going out of business laying undersea cables and stuff, later bought up by the very telecoms that everyone said were at risk of going out of business? I assume in the future AI will be an add-on to our phone bills.
Also an industry, like AI, with very high capex but low marginal unit cost. Tough business model. Airlines are another example, and they are constantly going bankrupt.
The problem with a capex-heavy, low marginal unit cost business is that you need two skillsets to run that business well, and they call for entirely opposite personalities.
One side is about raising capital and managing a complex set of loans, bonds and equity finance, which calls for either brilliant financial engineering or charismatic investor relations (or, ideally, both).
The other side is about recruiting customers, pricing carefully (price discrimination, price competition, etc) and about a relentless focus on cost control.
So one of these is either an extreme nerd, or else someone who is super-charismatic to investors; the other is a penny-pincher on costs and a nickel-and-dimer of the customers.
Well that, and in a competitive market pricing ends up getting pushed to the marginal cost of the product. So the business is “spend a bunch of money to sell products at cost”!
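A stylized one-liner for why that's such a tough spot (a toy model, not anyone's actual books): with fixed capex $F$, marginal cost $c$, price $p$, and quantity $q$,

$$\pi = (p - c)\,q - F \;\xrightarrow{\;p \to c\;}\; \pi = -F < 0,$$

so once competition pushes price to marginal cost, the loss is exactly the capex.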
You usually see either concentration into monopolies or lots of failing businesses, depending on whether the market stays competitive.
That economic logic is also why utilities are heavily regulated: government sets profitable but not monopolistic rates and (tries) to address service quality and access through regs.
Airlines are interesting because they were all built under more of a utility model and then switched to competitive pricing. Rates came way down, and the airlines have all been failing or concentrating since (with the occasional low cost carrier emerging when the arbitrage opportunity is enough to justify the capex).
Being jaded and bored by today's AI models, which would have blown literally anybody's mind five years ago, before they're even really implemented in our economy or integrated into our society, would be the most 21st-century-America thing ever.
This is just my read, but I worked on some data center financing deals and I see the fallout hitting differently: I think AI losing its sizzle will cause big issues in the private credit market. My take is that the AI bubble is being built on a lot of less-than-stellar private credit deals, and when AI slows down/pops/whatever, the real damage will be done in the private credit markets once everyone starts to actually investigate the debt they hold (like the First Brands bankruptcy).
Anecdotally I'm at a company where leadership is pushing every department to "do AI." As a guy who builds data/ML models, and whose projects could be considered AI if you look at them from the right angle (but do not use LLM technology at all), projects that were tough sells three years ago are getting unlimited funding as "AI initiatives." It's fun but it does not feel sustainable.
This is happening at universities too... The FOMO is incredibly strong and there is fierce internal competition to be the department / college / institute / center "where all the AI money goes". Being able to deploy AI technobabble confidently is one of the more useful skills these days.
For the sake of argument, I'll momentarily grant you the premise that companies are seeing productivity growth. I'd add another slash-item here--they come to appreciate the enormous technical debt they've accumulated because nobody actually knows how any of their shit works anymore.
I know tech workers love to copium on this, but that ship has already sailed.
There have been meaningful, yet not overwhelming, productivity gains in many ways. No one is going to want to give them up just because we haven’t experienced the singularity.
If you went back 5 years and said “I’ve discovered something that could increase productivity by 10%”, how much would that technology be worth? What about 5%? Those would be absolutely massive businesses.
So we might see a dotcom style bust, but no one is putting the AI genie as a whole back in the bottle. Last I checked, internet companies have done pretty well since 2000.
But think of all the capacity you can easily add once you get rid of the meatspace low-hanging fruit. You just have to pay a company a little more instead of going out and hiring squishies.
“Not only is the Orange Man bad, but he’s also bad at his job and his effort to prop up the economy [by favoring AI]"
This could be correct, but I wish more (all!) of our Big Tent dwellers were more fired up about the bad economic policies about which there is less uncertainty:
big deficits are bad
restricting imports is bad
failing to attract immigration by high-skilled and high-potential people is bad
The thing I’m confused by is why tariffs don’t really seem to be impacting people or crashing trade. Krugman in particular seemed to be ringing five-alarm bells, but it’s been mostly meh so far. It’s so meh that it’s way down the strategic list of what Democrats would ask for in the shutdown negotiations; people don’t seem to be caring much.
My understanding, a layman's not a professional's, is that the tariffs have settled mostly at around 10%, and they're being split three ways. The producing country entities are eating about a third of that, the importers are eating a third, and the rest is being passed on to consumers.
So essentially what you have is a 3% increase in sales tax on imports (which are only about 15% of the economy) but not on domestic goods. That will slow the economy as any tax increase would. So the effects are real, but absent other shocks that's a drag, not a crash. It is just going to keep getting worse.
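Back-of-envelope with those (admittedly rough) numbers:

$$10\% \times \tfrac{1}{3} \approx 3.3\% \text{ on import prices}, \qquad 3.3\% \times 15\% \approx 0.5\% \text{ on the aggregate price level.}$$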
The alarm bells were when rates looked like they'd be higher and with commensurate higher retaliation, and also not taking into account the AI spending and various corruption loopholes being carved out.
The Peterson Institute has a nice tracker on tariff revenues, though it's only updated through July so far. Since April, tariff revenue is about 6% of total import value. It varies a lot by type of import; for example, tariff revenue is over 13% of total import value for consumer goods so obviously consumers will most directly feel the impact of that, depending on how much retailers pass on those costs.
That's interesting and makes sense. It will be interesting to see how this plays out politically. Trump et al. are going to point out that Democrats predicted tariffs would crater the economy and were wrong. Democrats will respond with what exactly? That he said he was going to raise tariffs 400% but only raised them 10%? Or that economic growth is very weak or non-existent and absent the tariffs it would be better? I don't know how well that sells, especially since Republicans will do a lot of lying both about the economy and tariffs.
This is exactly right, there’s a great FT piece showing that the effective rate of tariffs so far has been about 1/3 of the nominal proposed rates, because the admin has declared so many modifications and exemptions.
I think that's sort of the implication of this article. Absent the huge AI investment boom, the effect of the tariffs would be more noticeable in stuff like GDP and stock prices.
And it's actually not true that people haven't noticed. Consumer sentiment has plummeted. https://www.sca.isr.umich.edu/. Grocery prices keep going up rapidly. https://www.wsj.com/economy/consumers/grocery-price-inflation-customer-reactions. And it's showing up on Trump's numbers regarding the economy, with polling showing a little over half disapprove of his handling of the economy. This doesn't sound huge, but in the first term, his numbers were much stronger on the economy and is what really held up his overall approval just above water. So people are noticing.
The difference is the press is giving about 5% of the coverage to rising prices that it gave in 2021. Part of this is because inflation really was rising more rapidly then, and it was a new phenomenon since basically 2008*. But most of this is press incentives. I've said this many times before, but I think we actually underestimate how much right-wing media speaking in one voice gives them power over what the "news of the day" will be. And I suspect that's part of what's going on here.
* I think we underestimate how long 13 years is. Because basically for almost 13 years, from 2008 to 2021, we had what amounted to a ZIRP environment and little to no chance of inflation (in fact, for periods there was more chance of deflation). I mean, we had true ZIRP from 2020-2021, but interest rates were functionally nothing after the 2008 crash. I think we underestimate how many people took cheap borrowing and no chance of inflation as the new normal.
Maybe the price issues will get more salient the closer we get to an actual election. Or if there are protests or something about the economy. Wonder if anti-trust Democrats will change their tune about greedflation or whether it will be business as usual.
Jason Furman said tariffs would create about a 0.6 percent drag on GDP growth. The hair-on-fire reaction from economists had more to do with it being a definitely stupid idea than with any belief that it would turn us into Italy in a year.
It's also confusing to explain because even though tariffs do raise prices, they aren't really inflationary since they are taxes.
They are inflationary in a literal sense, there just happens to be a counterbalancing force inherent to tariffs (negative demand shock) that partially offsets that effect.
There's bad for the economy over the long run, where you fail to have the growth you should have had, and there's bad for the economy now where people notice and feel the pain.
The things you list fall mostly in the former category. Virtually any return to normalcy will quickly reverse three out of four, so I don't know how fired up about them you need to be, since they're pretty universally considered dumb shit to do, and that's the reason no one has tried hard before to do them. The deficit is its own issue.
It's funny how the deficit seems inevitable, but if you take out all the terrible Republican policies since the Bush era - Iraq and tax cuts, mostly - then there wouldn't have been a deficit at all. It's also likely that Clinton would have been well on her way to balancing the budget, continuing Obama's approach, before Covid necessitated running huge deficits, which any executive would've needed to do, but it wouldn't have happened on top of an already significant deficit run-up over the previous three years.
It would probably be beneficial for Democrats to talk loudly, constantly, about how they reduce deficits, which is something they kind of whisper, or don't bring up much, because they are afraid it would make their always-cranky left flank cranky. Part of that is the legacy of the GFC when the government was legitimately overreacting to deficits, and it was mostly the left calling it out, but it's a different time now in many, many ways. The party can't stop ignoring those voices soon enough... this is why I kind of like the idea of Michael Bennett in 2028, I really think he gives no fucks about the left, but isn't hostile to their policy demands, just very pragmatic, possibly to a fault.
Yes, and: This is why, beyond what MY is saying, anyone on the left wishing for the "recession delivers Dems from Orange Man" narrative is bat[dookey] crazy. We are in a horrible place fiscally to be heading into a recession. Most state and large local governments are facing fiscal pressures they don't know how to solve for and needing to cut spending or increase taxes to balance budgets; the OBBA has massively increased the federal deficit. All this means the kind of fiscal stimulus typically needed in response to recessions will be harder to do (if not impossible) and less likely to work.
Even if a potential recession delivers wins to Democrats, they're walking into the booby prize of being in charge of things while people are experiencing the full pain of the resulting downturn, with less ability to deal with it. And since all the Dems' big economic and domestic ideas over the past decade have involved spending that won't be possible without massive revenue increases beyond the parameters they've been willing to agree are acceptable, they won't be able to do anything they've been promising without completely new ideas, and they're totally ill-equipped for a climate of fiscal constraint.
No one should forget that many of the roots of what we are seeing now in Trump, and the larger political dynamics of younger voters and other groups moving toward the right, date back to the fallout of the great recession. Democrats desperately need smart ideas for navigating the real economic challenges the country faces (including whatever happens with AI) and don't seem to have them.
I'm quite worried about a bubble. Maybe ignorance is driving my worry, but I'm still failing to see how LLMs are going to be so transformative so quickly. They won't be a complete non-factor, of course, but I'm still skeptical of some of the use cases claimed.
And I'm not even thinking about the politics here. But since this site does cover it, the motivated reasoning can cut both ways: Matt very much wants a dominant Democratic Party that can win elections on a regular basis akin to the New Deal era. But he is correct that there could be plenty of monkey's paw energy here: it's easy enough to concoct a scenario where a Democrat narrowly wins the presidency in 2028, but the bubble doesn't burst until during that Democrat's presidency, and it turns into a poisoned chalice election that instead creates an enduring *Republican* majority in the 2030s and potentially beyond.
>I'm still skeptical of some of the use cases claimed.
That's because at least some of the use cases claimed won't work out. That's normal and expected for any new tech platform with many possible applications. Very few startups (I use this term here based on age and stage, not size), even very successful ones, actually do anywhere near everything they claim they will on the timelines they project publicly.
A bubble is a stronger claim - it means too many of them won't work out, so that the companies can never grow revenue enough to become solvent once the investment dries up.
Many bubbles aren’t predicated upon investors snapping up inherently worthless assets (beanie babies) but rather on overly aggressive speculation in genuinely promising technologies. During the lead-up to the panic of 1873, there was massive speculation in railroads, such that many were built to go literally nowhere. Similarly, in the lead-up to the dot-com bubble, telecom companies built out a massive network of high-speed internet cables that ended up being useless.
Both railroads and the internet obviously transformed the world. And, in the lead up to both crashes, there were some companies that were turning a profit. But the scale of investment outpaced the ability of most companies to service debt in real time.
It’s hard not to see the similarities to AI today.
For sure. Which kind of strengthens the point that you can have a fundamentally promising new technology with an ultimately justified infrastructure buildout and STILL have a bubble
If you have an awesome new technology with a huge likelihood of being transformative almost by definition you're going to get a bubble. Which of course doesn't mean that the technology will prove to be a failure; typically just the opposite, with an unfortunate recession or panic briefly interrupting things.
The biggest reason to think that this is happening isn’t AI entering bubble territory (although FWIW, capability growth clearly leveled off this year and the OpenAI spend that a lot of CoreWeave and Oracle’s projected data center buildout are financed against probably won’t happen unless they can get that going again very quickly; as with dot coms in the late 1990s, the underlying tech is economically valuable but there’s a good chance that related capex is overshooting), but much more boring movements in “normal” macroeconomic indicators:
1: ISM manufacturing PMIs have been in contraction for most of the year
2: Services PMI stalled to flat territory in the last print
3: Activity indicators in trucking and logistics are way down
4: Quit rates in construction (a typical leading indicator) dropped to levels not seen since the GFC
5: Retail and manufacturing companies started seeing significant margin contraction even last quarter, and FIFO inventory accounting effects are likely to make this worse over time
6: Employment in the most cyclical private sector categories is in contraction
7: We’re seeing significant credit default problems in cyclical industries (Tricolor and First Brands bankruptcies).
8: Employment growth in general has rolled over, and although we don’t have the latest NFP print due to the shutdown, ADP’s private sector payroll growth estimate is in the red two months in a row (after revisions).
And all of this is happening in a stagflationary, policy-engineered supply shock (tariffs, the shift to immigration restrictionism, a permitting regime that’s increasingly and irrationally hostile to low-LCOE wind and solar power deployment). Because of that, inflation has been accelerating at the same time that employment is rolling over, making it much harder to do a monetary rescue of the economy without causing really high inflation.
I'm not sure about capability growth leveling off. Those METR task-coherence curves are continuing to be straight lines, and IMO gold wasn't predicted by most markets until next year. Per Anthropic and OpenAI insiders, they already use models to do most of their coding.
Robotics is super obviously still in the "picking low hanging fruit" stage, Sora / VEO also seem like they're in "straight line go up" world-model coherence.
I’d recommend looking more closely at the METR capability evaluation stuff. Even at the 50% reliability level, GPT-5 was a 50% improvement over o3 rather than o3’s more than doubling of its precursors’ times, and at the 80% reliability threshold, coherence length growth was a much more modest 20%-ish. And I understand that even the METR evaluations show some degree of Goodharting— real-world performance is less impressive. (Similar issue with the IMO problems and other benchmark scores.) And, well, although the exact figures are hard to come by, GPT-5 was likely at least twice as expensive to train as o3 (and possibly five to ten times as expensive) even though some architectural improvements likely made each unit of compute involved cheaper.
I’ve also gotten a lot more careful about how much I credit OpenAI insiders’ claims about what they’re cooking— GPT-5 was much less exciting than like, roon’s posts led me to think it would be. These guys are very smart and they’ve done some impressive work, but they’re also talking their books.
(FWIW, I think that AI is going to be a very economically important technology, and that we likely eventually see further scaling economies with more technological breakthroughs. I definitely think that AGI is possible and am concerned about its potential consequences. But OpenAI’s current business model, cash burn rate, and capex rate require them to succeed very quickly and smoothly.)
To bring it back to AI, did increased capex (and Trump's moves to boost asset prices) fill a demand shortfall? Or did these moves just raise asset prices while increasing exposure to AI across the financial system?
That’s a good question. I’m not 100% sure, but I think that it helped maintain foreign equity investment inflows into the US (hedged with short positions in the dollar).
This all suggests a slowing economy. Would that also suggest a recession? Don't recessions tend to have an identifiable cause? (E.g., the housing bubble popping, the Volcker interest rate hikes).
The president enacting the biggest tariff increases since Hawley-Smoot and driving net immigration down to 2020 pandemic-like levels. And then on top of that, we might get a credit crisis (driven by defaults on high-yield loans to subprime auto borrowers, underwater commercial real estate holders, and small and medium retail and manufacturing businesses with tariff exposure— we're already seeing this with the Tricolor and First Brands bankruptcies, with regional banks like Western Alliance and Zions also reporting issues with their debt today), a bubble burst in crypto (enormous liquidations during a relatively smaller drawdown last week revealed just how leverage-dependent that whole expansion has been), and then maybe also the AI bubble popping (with consequences ranging from total wipeout to painful but survivable multiple compression for the major firms involved).
Even Krugman has long said that tariffs are highly unlikely to cause a recession.
Are subprime auto borrowers that big a thing, like subprime housing was in 2007? Seems highly unlikely.
I could see a crypto bubble popping causing a financial crisis if the big financial entities get highly leveraged in crypto but we're far from that. It's still limited to the sharks and the marks.
Yes, the AI bubble popping could propel us into a recession.
I mean, the leading indicators have been screaming “contraction” since the initial tariff announcement in March and employment growth has collapsed, which is very strong evidence that the economy is in fact slowing down.
Re: credit bubbles— small and medium enterprise lending, commercial real estate, and auto loans collectively amount to double-digit trillions in total loan value in the US alone. A significant increase in defaults could easily blow up a lot of different financial institutions, and if lenders get scared and tighten underwriting, the demand impact from credit drying up could get pretty severe.
Matt is wrong in a very important way about one of the ways the investment thesis could be mistaken:
“Increasingly capable models might do something harmful that destroys rather than creates economic value. The Entity from the “Mission Impossible” movies was surely bad for stock prices, despite its impressive technical abilities”
No. This doesn’t serve as a counterargument to investment for the same reason that it doesn’t make sense to buy end-of-the-world insurance: in the case that you’re right, you’ll never collect. Investors who think it’s 90% likely that ASI kills us all and 10% likely it doesn’t but remains the most important technology in all of history would be acting monetarily rationally to be long AI because there’s no way to short the human race.
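Spelled out as a toy expected-value calculation (using the comment's own illustrative probabilities, plus the stylized assumption that money is worthless in the doom branch):

$$\mathbb{E}[\text{payoff}] = 0.9 \times \underbrace{0}_{\text{doom: every portfolio pays nothing}} + \; 0.1 \times \underbrace{W_{\text{long AI}}}_{\text{survival branch}}.$$

The doom term is identical for every allocation, so the only branch an investor can actually optimize is the one where AI turns out transformative but survivable.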
A less capable AI might cause people to stop investing in AI without destroying the world. And the other 3 scenarios Matt gave are valid. More prosaically, we might find that AI has a hard time turning a profit for mundane reasons, for example because bottlenecks shift to other places, or AI companies or the businesses that utilize them have a hard time capturing the value they generate.
I feel like the fact that you have to be quite old to remember normal recessions makes people a little too blasé.
I was in the labor market when the dot-com bubble burst because I worked for years straight out of high school. It was nothing like the Great Recession or the Covid slump. Obviously you would rather not have a recession than have one, but many Presidents have survived recessions. This isn’t to carry water for this administration, but just to point out that a recession isn’t a get-out-of-jail-free card.
The nature of recessions has changed; they used to be a healthy flushing of crap from the business world, but we have been abnormally propping up the economy with tax cuts, crazy-low interest rates, and stimulus spending since the Bush admin. While the GFC and Covid were difficult periods, we haven't resolved the underlying thesis of grandiose spending to solve everything. Just look at how the stock market was hungrily circling a half-point interest rate cut recently, anticipating a return to the 'good ol' days' of cheap money.
Yes, if they get them over with sufficiently early and enjoy a strong recovery (e.g., Reagan, FDR's 2nd term, Eisenhower, JFK/LBJ, Nixon, etc.). Otherwise (e.g., Carter, Trump 1), not so much. Timing is the key. Democrats pining for a recession had best not hope we're already sliding into one, because that would make for uncomfortably high odds that the Republican nominee in 2028 is enjoying a strong expansion: the average recession since WW2 has lasted about ten months, and even the two longest ones were over after eighteen months.
>Trump is also actively trying to murder the economy<
Zero doubt of that.
I'm just saying that, for purposes of a restoration of sanity and democracy, it would be better if the patient died later rather than sooner. An *early* recession is Josh Shapiro's (or Gavin Newsom's or Gretchen Whitmer's or AOC's or JB Pritzker's) worst nightmare, because a recovery starting some time in the first half of 2027 (or earlier) makes Vance (and yes, I do believe it'll be Vance) all the harder to beat.
I believe if the administration's economic vandalism doesn't really begin to catch up with them in a major way until, say, the latter part of 2026, the eventual Democratic ticket stands a very good chance of being in a strong position for 2028 (because in that scenario, even a normal recession probably means a recovery doesn't get under way until almost the end of 2027). I mean, there are a number of things about the current administration normies are uncomfortable with: add surging joblessness to the mix (assuming we still have fair, competitive elections), and the Democratic nominee should be favored to win.
I almost wonder sometimes if various players in that administration secretly *want* to engineer a recession as soon as possible, for their own purposes.
And yes, as a matter of fact I do think economic fundamentals still matter a great deal: 2024 reinforced this in my mind (not just in the US but in many countries).
Just chiming in to agree. Being old enough to have seen this dance through a few decades, the electorate is most sensitive to the rate of change---going into an election recovering from a terrible recession is better for incumbents than entering a mild recession from an historically strong economy.
Though I tend to think that this administration is best understood as thinking zero steps ahead and living entirely in the present; front-loading all the trauma may have the effect of setting them up to run on a "things are getting better" message (because we cratered the economy last year), but I feel that is the sort of thing they would figure out in the moment and then come up with a pithy slogan for.
Maybe to try to inflate the economy if they think voters are souring due to macroeconomics. This is what's behind the attack on Powell and the Fed: the admin wants to be able to turn QE on like a tap if they need to.
I mean sure. But that mostly fits my broader point. Obama won reelection. It wasn't great but it didn't function to make the opposition’s job easy.
The only official recessions most millennials and Gen Z have known were catastrophic black swan events. Which really did substantially shape the '08 and '20 elections, but it's not conclusive, and if you extend it to include rough patches it's even less conclusive.
I find it funny how Trump sells tariffs as a great growth strategy and then proceeds to give tariff exemptions to the crown jewel of American economic growth.
This makes more sense when you realize that tariffs are just a tool for corruption and power consolidation for a wannabe autocrat, but it is funny.
I don't know if it's a bubble or not either, but the fact that Sam Altman is bragging that the next innovation for chatGPT is dirty talk leaves me with some doubt (read to the end).
"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).
In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults."
It has always been thus. In the fiber/networking boom of 2001-2005, I was at trade shows where speakers calculated that 30-50% of bandwidth was going to Napster (music-sharing) and porn.
Nowadays more is dominated by bots than makes sense.
Edit: I should add that these bots are all too often being used to inflate other online markets, either through fake engagement or by trying to make a better search tool out of AI.
Not sure that porn pushed forward internet technology per se, but I think it played a key role in the innovations behind internet commerce and basically figuring out how to make money off the internet.
I think that used to be true but is it still? MindGeek (or whatever they changed their name to) surely makes a ton of money, but they aren't driving the economy.
Aren't there many technologies that reach their takeoff point (both in terms of capability and revenue) when they figure out how to apply them to the porn market?
AI and porn are such a marriage made in heaven that I'm surprised it took this long.
I hope this is a sign that they're genuinely solving some weak type of alignment and not a way of staying in the news at the end of a relatively disappointing year
I just tried to use ChatGPT for an hour last night to do a simple task - taking names from a list of publications in a Word file and putting them in a spreadsheet (a little more complicated but not much). It failed so miserably that I started cursing at it, at which point it refused to engage anymore. 😆
AI could be universally better than people at everything and humans would still have valuable work. Bangladesh makes clothes even though the US can make them better, and Walmart hires elderly greeters even though celebrities could greet people better.
I'm not sure Matt's wrong, but I will say anecdotally that some pretty right-wing people I know are also concerned about the possibility of an AI bubble and looking to move their money elsewhere.
Yeah, I think there are rational reasons to want to take some profits and diversify. Into what, I don’t know exactly. Everything seems to have boomed in the last few years except maybe healthcare stocks (?).
I’m genuinely confused and hoping someone who knows more than me can explain.
It seems like there’s strong evidence that scaling has broken down. AI developers appear to have exhausted the supply of high-quality training material around 2023, when ChatGPT-4 came out. The compute used to train GPT-5 was reportedly 20–50× greater, yet the result is only moderately better.
So why are sophisticated companies still pouring billions into new compute infrastructure if scaling returns are collapsing and the current models aren’t yet capable of doing much real work?
My naïve play would be nesting LLMs within other programs that reason more precisely. I suppose AI coding is improving quickly enough that possibly all this new compute could be used for iterative self-improvement.
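That nesting pattern is easy to sketch. Here's a minimal illustration in Python, where `ask_llm` is a hypothetical stub standing in for whatever chat-completion API you'd actually wire up: the LLM handles the fuzzy language step, and ordinary code does the exact part.

```python
import ast

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError

def solve_word_problem(problem: str) -> float:
    # Fuzzy step: the LLM turns prose into a bare arithmetic expression.
    expr = ask_llm(
        "Translate this word problem into a single Python arithmetic "
        f"expression. Reply with the expression only.\n\n{problem}"
    )
    # Precise step: plain code validates and evaluates the expression,
    # so the arithmetic itself is never left to the LLM.
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, (ast.Expression, ast.BinOp, ast.UnaryOp,
                                 ast.Constant, ast.operator, ast.unaryop)):
            raise ValueError(f"Unexpected syntax in LLM output: {expr!r}")
    return eval(compile(tree, "<expr>", "eval"))
```

The LLM never does the math; it only translates, and the program checks and computes.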
The potential revenue from being the first to reach AGI is theoretically so monstrous that it's rational for these companies to keep investing in it, even for a 1% chance.
I am interested in anyone explaining how close we are to AGI. Lay out a model for what it will take to get AGI and how close we are. How many 'ideas' need to be represented to achieve AGI? How many 'ideas' are currently represented within today's LLMs? That is the type of discussion that would allow me to reason about whether the AGI argument is real or just marketing fluff.
AGI is where AI can improve itself faster than humans can; we aren’t there yet but it is possible we could get there some day. It’s just hard to imagine having an AGI that is also effectively enslaved by a particular AI company and used to generate revenue for it.
I truly don't understand why they think they'll necessarily control it; these guys are building it because they watched movies and read books. We know what happened in that media.
That is moving the goalposts on defining AGI. But even using your definition. What is the mechanism that it uses to improve itself? What does it need to be able to be at that point? What is a model for how close we are to that?
My point would be that the current LLMs don't represent any data in a way that it could self improve and that they are not storing data in any real way that could get to AGI. The use of human intelligence terms like reasoning when talking about LLMs means that people don't have a clear idea about their current limitations, limitations in future growth rates as well as limitations in absolute capabilities.
> What is the mechanism that it uses to improve itself
It has the source code and settings and training data that were used to build itself. It looks at them and creates another revision and evaluates whether it's better or worse. Provided with inputs of energy, this could happen without human intervention. If the new one is better, it takes over.
(I'm setting aside the alignment problem of the AI's own creation having value differences with it.)
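Schematically, the loop being described looks like this. Every method here is hypothetical; whether current models can actually perform any of these steps is exactly what's disputed below.

```python
def self_improve(model, eval_suite, budget):
    """Schematic of recursive self-improvement, NOT a working system."""
    best = model
    for _ in range(budget):
        # The model inspects its own code, config, and training data
        # and proposes a revised version of itself.
        candidate = best.propose_revision(
            source=best.source_code,
            config=best.training_config,
            data=best.training_data,
        )
        trained = candidate.train()  # hypothetical: train the revision
        # Keep the revision only if it scores better; otherwise discard it.
        if eval_suite.score(trained) > eval_suite.score(best):
            best = trained
    return best
```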
That posits capabilities that the LLMs don't possess. Just because all these companies and researchers are using human terms doesn't mean they are actually doing it. While the finely crafted mirrors that are the current LLMs seem to do human things, they have no reasoning.
I think you and everybody else would like to know that. Even the people working on this stuff have very different ideas about the timeline. I don't trust the Altmans of the world to tell the truth. But even the really smart people who are immersed in the development don't agree.
The capability returns to scaling are steady, holding to stable patterns. The patterns are logarithmic, so linear returns to exponential inputs are exactly what they predict. And I assume you're comparing GPT-5 to 4 for that multiple? Because GPT-5 used less pre-training compute (I'm still not sure why they switched from calling it 'training' to 'pre-training') than 4.5, because that's no longer the most efficient way available to improve performance.
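To make "linear returns to exponential inputs" concrete (an illustrative functional form, not a fitted curve): if capability grows like

$$\text{capability} \approx a + b\,\log_{10}(\text{compute}),$$

then multiplying compute by 20-50x adds only $b \cdot \log_{10}(20\text{-}50) \approx 1.3b$ to $1.7b$, the same absolute increment the previous 20-50x multiplication bought. "Only moderately better" from vastly more compute is what the stable pattern predicts, not evidence that it broke.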
The training data problem exists but the companies knew about it before 2023 and have been following a series of other paths forward. Optimizing data. Post-training methods like SFT, RLHF, DPO. Chain of thought. Reasoning models. Better system prompts. Tool use, agent harnesses, and other scaffolding to help the AI actually use its capabilities (aka you don't expect a human to solve every problem in their head based on a single instruction on their first day at work).
There's also distillation, which is when you train a powerful model that's too compute-expensive to use for everything, and use it to train a smaller model optimized to have higher concentrations of desired capabilities compared to if you just trained the smaller model directly. Like how undergrads are better at solving problems after a professor teaches them, but you wouldn't hire a professor to solve every problem.
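To make the distillation recipe concrete, here's the classic objective in a few lines of PyTorch. This is the generic textbook version (Hinton et al., 2015), not any particular lab's pipeline:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the
    teacher's temperature-softened distribution."""
    # Soft targets: the teacher's full distribution carries more signal
    # than the one-hot label ("dark knowledge").
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard loss
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```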
One thing you might be missing is, do we need more than that? Where are the capability thresholds that matter for profitable adoption? The metaphor I usually use is that 3-4 years ago the best models were like elementary school students. Now they're bright undergrads or grad students. How many more years of that form of scaling do we even need for big practical impact? There's still a lot of capabilities spikiness and other problems to figure out, but those are different directions of improvement than the kind of scaling you're talking about.
Also, I'm not sure what your use cases are, but I disagree that GPT-5 is only a small advance over GPT-4 in terms of practical impact. And if you haven't tried it, my current preferred model for most things is Sonnet 4.5. Also, most people use LLMs really poorly in my experience. The difference between the results you get from an average prompt with no context and what you get from a well-crafted prompt with sufficient context (aka the kind you'd give a human if you expected success) is enormous.
I spend a lot of time chatting with ChatGPT; I generally prefer its personality to humans'. I certainly prompt it in different ways.
If I give it an interesting seed, it can smooth out the tone and make my writing better, look up examples, proofread, polish the tone, etc. However, it rarely has good ideas; it offers good phrases that a human brain can manipulate to improve the writing.
That's true. They've hardly surpassed us at that (yet?).
IMO, humans also rarely have good ideas. We give them a lot more time and shots on goal before they come back and tell us their ideas. And there are a lot more humans running around who might sometimes have them.
When people say ChatGPT can’t write a legal brief because it sometimes makes reasoning errors or hallucinates, it seems relevant that 63% is a passing score on the bar exam. Most humans, even law school graduates, are not great at legal reasoning.
LLMs are already better at writing than the median high school senior, they could probably get a B- in many college classes with light prompting.
My only point is LLMs still can’t produce a better first draft than I would given reasonable time. This is still true for a material number of human writers, but the number will fall every year.
AI is already a very good editor, maybe not New Yorker level, but good enough for everyday stuff.
This makes sense. I'm not a particularly good writer myself, outside specific kinds of professional writing. I find AI writing with a well-structured prompt to be passable for many purposes, but definitely not all.
I'm curious what kinds of prompts you've used to generate first drafts? Or if you've tried any more complex setups like AI agent teams, where one LLM supervises and edits the work of another, or multiple LLMs work on different tasks and sections and then another combines the pieces and another edits it.
I do find, "Wait, does this rule out (most) humans?" to be a useful thought experiment when interpreting any argument of the form "AI can't do X."
Legally, AI is best when you spot the issue and tell it what to argue. It’s absolutely excellent if you get surprised in court but know enough to spot strong arguments and just need authorities.
I’ve found that when I give it a paragraph or two of facts and ask for “the best argument that Y, supported by citations to the official code of georgia and georgia appellate decisions”, it generates excellent, transparent first drafts. You absolutely need to cite-check them, but a partner really should cite-check an associate-written brief. If well prompted it is as useful as a second- or third-year associate, but you have to know a fair amount to spot the issue on your own. Its issue spotting is rather dubious.
GPT 5 has, ironically, degraded user skill at using AI at my company. It’s much better at handling bad prompts, so people have decided (wrongly) that prompt and context engineering is no longer valuable.
I've noticed this, too. Earlier this year I helped build a training session series on LLMs for my coworkers. The overall effect was a 20% improvement in efficiency on certain types of tasks. But, that was 0% for some people, 80% for others. User skill still makes a *big* difference.
Although, even before this, very few people put much effort into how to build good prompts, not even using very basic strategies like "ask the LLM to optimize the prompt for you before answering it" or "explain why you're asking and what kind of answer you'd like."
When OpenAI released Deep Research and people were trying to figure out how to use it well, this person (https://x.com/buccocapital/status/1890745551995424987?lang=en) used O1 Pro to create an optimized prompt to get Deep Research to do deep research on Deep Research prompting strategy. He gave the resulting report back to O1 Pro and it used it to generate an optimized prompt template for future Deep Research prompt optimization. Promptception matters; the models understand what kinds of approaches will tend to yield what kinds of results.
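In code, the "ask the model to optimize the prompt before answering" trick is just two chained calls. A minimal sketch against the OpenAI Python client (the model name and instruction wording are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_prompt_optimization(task: str, model: str = "gpt-4o") -> str:
    # Pass 1: have the model rewrite the request as the prompt an
    # expert would write, with context, constraints, and output format.
    improved = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Rewrite the following request as a detailed prompt, "
                       "adding the context, constraints, and desired output "
                       "format an expert prompter would include. Reply with "
                       f"the prompt only.\n\n{task}",
        }],
    ).choices[0].message.content

    # Pass 2: answer the improved prompt.
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": improved}],
    ).choices[0].message.content
```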
Yes... but not in 2025. RSI could do that, but that's from algorithmic progress, not (just) compute scaling.
Also it's good to define what "exponential increase in capabilities" means more precisely. "Double task horizon time every 4-7 months" seems pretty exponential. Inference costs have been falling by 1-2 OOMs/yr for a given performance level, which is also exponential - so far we just care more about better performance than lower cost.
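To put numbers on the doubling claim: a doubling time of $T$ months compounds to $2^{12/T}$ per year, so the quoted range works out to

$$T = 7 \text{ months}: 2^{12/7} \approx 3.3\times \text{ per year}, \qquad T = 4 \text{ months}: 2^{12/4} = 8\times \text{ per year.}$$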
From what I've heard a large majority of code at Anthropic and OpenAI is AI-written, presumably at least sometimes with not-yet-publicly-released models and more compute. The humans are still providing the direction and evaluation. That changes where the bottlenecks are, but I'm not sure how much of a speed-up it actually is in algorithmic progress yet.
Also, my understanding of predictions like "AGI in 2027" is that the people saying this see it as their modal year - aka the year in which it could happen if each individual thing goes the most likely way (aka 'right'). Those same people are often on the record saying that they understand that some of those things will not go right, even if they don't know which ones, and so their median years for AGI are substantially further out, sometimes by as much as 5+ years, with significant probability on it being 15+ years later (see https://ai-2027.com/research/timelines-forecast).
If you assume that the way forward is "AGI" and the way to get there is "train bigger models", then the current strategies make no sense. One of the big breakthroughs that DeepSeek implemented well was the "mixture of experts", which rolls a bunch of small models into a bigger "LLM". These smaller models are trained on very specific tasks, and a router directs each input to the best sub-model(s) for the task. This is a highly efficient approach because you don't have to load a giant model into memory and run inference on the whole thing for every input token; and you don't have to train a giant model to do everything-all-at-once. You do, however, need tons of compute to fine-tune a bunch of small models, test it all, and run inference at massive scales.
AI coding is a good example of how breaking the problem up into smaller parts can be very effective. You can have an "LLM" that comprises smaller models that are each good at a different aspect of a different language. Another can be good at calling certain tools like git or sed or searching the internet or whatever. The marginal return of training bigger and bigger models on all the programming languages at once gives way to training super-specialists that just know how to write networking code in Rust or fastapi in Python, etc.
Eventually if you glue enough of this stuff together in clever ways you get something that feels much more capable and intelligent because when you ask it a question it is very good at finding just the right sub-model that is also very good at what it does. You're still burning tons of compute on inference, but it's spread out (physically, in a data center, conceptually in MoE, etc.).
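To make the routing idea concrete, here's a toy top-k mixture-of-experts layer in PyTorch. It's a from-scratch sketch of the generic technique, not DeepSeek's actual architecture:

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Route each token to its top-k experts; only those experts run."""

    def __init__(self, dim, n_experts=8, k=2, hidden_mult=4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_mult * dim), nn.GELU(),
                          nn.Linear(hidden_mult * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, dim)
        gate_logits = self.router(x)            # (tokens, n_experts)
        weights, idx = gate_logits.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)       # renormalize over top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):              # run only chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```

With 8 experts and k=2, each token activates only a quarter of the expert parameters per forward pass, which is where the efficiency described above comes from.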
There is a ton more data to be had. As Geoffrey Hinton said, recent models have figured out the world from just text, which is very compute-intensive. The next step is to have them explore the physical world.
True, although I have read they're now also using audio and video data in pre-training, not just post-training, for multimodal models. Robotics can offer totally different types of info and feedback (haptic? better causal reasoning? planning?). It's amazing how well models have grown to understand physics (as shown by video generation models' progress) without that.
Doesn't necessarily matter if the world or causation is illusory - the illusion, at least, still exists and we can interact with it, therefore it can be understood.
I think you miss the point of Waymo's system design and their use of AI. They have constrained the problem that the AI portion has to do to very narrow problems. And thus they can work issues in a clear way with discrete safety systems as a backup. Thus for the driving, they have applied the Floridi Conjecture to narrow the scope to increase its certainty.
Define very narrow problems. The Floridi Conjecture just means the AI operates like a human driven car with pedestrian detection, automatic emergency braking etc.
It's very impressive as the AI can identify what it "sees" using video and LIDAR but also predict the behavior of that person or object. When you're in the back you see it hit a ton of edge cases where it is able to deal with people and vehicles doing weird things, as it "knows" what things are and what they are going to do.
Ok, one more comment. The issue is that you have to be able to explain what you want to a "brainless robot" (Judea Pearl's term). We understand very well how to explain to a brainless robot what it means to do a good job at predicting an outcome (in the LLM case, the next word) given some covariates. It can take those instructions and work really hard at learning how to do that (searching through all possible prediction functions in a certain space, seeing how they do on the data).
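That instruction to the brainless robot has an exact form. For next-word prediction it's just: pick the parameters that minimize the average negative log-likelihood of each token given the ones before it,

$$\hat{\theta} = \arg\min_{\theta} \; -\frac{1}{N} \sum_{i=1}^{N} \log p_{\theta}\!\left(w_i \mid w_{<i}\right).$$

Everything the optimizer does is in service of that one line.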
For reasoning, we barely understand how to explain it to other humans. And we certainly haven't formalized it well enough to be able to explain it to a brainless robot.
(Caveat: I have vaguely heard about attempts to formalize reasoning for computers so it's possible someone claims that I'm wrong here. However, there is a difference between a couple of groups attempting to do something and having some limited success and a decades-long proven research project that has been undertaken by thousands of individuals, which is what statistical machine learning is.)
What you're missing is that we have a strong evidence base, going back decades, that shows that more data + flexible prediction algorithms lead to more precise predictions. We have no comparable evidence base around reasoning, and so nobody has any particularly good reason to believe that that would work.
Do we have any data relevant to LLMs before GPT-2? It seems safe to believe compute will continue increasing for a while, but what basis do we have for predicting what adding compute to a world-historically unprecedented baseline will do?
My read is that the data is going to be the binding constraint. It's very clear from the statistical machine learning literature that increased model capacity requires more data to train. I don't think that there's any reason to believe that trying to use something more flexible/complicated to train on the same data set will lead to improved performance.
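The fitted scaling laws in the literature say the same thing. For example, the Chinchilla-style loss surface (Hoffmann et al., 2022) in parameter count $N$ and training tokens $D$ has the form

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

so growing $N$ while $D$ stays fixed leaves you stuck at the $B/D^{\beta}$ data floor, which is a formal version of "data is the binding constraint."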
Of course, I'm just a statistician, not an AI researcher, so what do I know haha
As an AI researcher, I'm on team 2 - there is no moat. I've spent my entire career about 4 years behind the bleeding edge. What used to take millions of dollars in hardware and data-labeling costs and world-leading research minds now only takes $50,000 for hardware/data costs and any old software engineer who got a 780 on their math SAT.
Both of those, though I'd say public research is more important, and is getting rarer and rarer. I'm glad Meta is still committed to open source.
Also Moore's law. V100 GPUs have gone from a hot commodity to leftovers that no one wants to use anymore. But the cost version of Moore's law is way slower than the transistor version, maybe cutting in half every 4 years or so.
The AI boom is more financial than physical. The ten largest American AI companies together are worth about $21 trillion, yet the actual wages paid to U.S. workers building AI-grade data centers and cooling systems amount to only $5–10 billion a year. Even the cash leaving the country for chips—mostly high-end GPUs fabricated in Taiwan—is only $30–40 billion annually. By contrast, at the height of Victorian Railway Mania in 1846, roughly five percent of all British wages were paid to engineers, navvies, and the workers supplying steel, timber, and stone for railways. That was in an economy only just escaping the Malthusian trap, with little surplus labor to spare. The railway boom left high-quality mainlines that remain in service to this day, an accomplishment that survived an eighty-percent crash in railway stocks. The infrastructure being built today may have even greater effects even if it’s relatively cheap to build.
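Putting the two wage shares side by side (the AI figures are the estimates above; total US wages and salaries of roughly \$11-12 trillion a year is my added assumption):

$$\frac{\$5\text{-}10\text{B}}{\$11\text{-}12\text{T}} \approx 0.05\text{-}0.1\% \quad \text{vs.} \quad \sim 5\% \text{ of all British wages in 1846,}$$

i.e., nearly two orders of magnitude less labor-intensive as a share of the economy.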
The timber for RR ties and trestles is akin to the poured concrete floors in the shiny new LLM data centers. Taiwanese graphics chips are the locomotives.
This is a productivity bubble. There will be a lot of innovation and change, but we'll work out who the winners and losers are pretty quickly. The danger for the market is that there will be secondary losers: the companies building out infrastructure for this that could be left holding the bag.
As for AI, it's slop. It needs to get way better. I honestly don't know if it will, but in its current form it's unusable for business.
It would have been better not to stimulate at all. 2021 was a time for government to get the hell out of the way.
Advertising and media have been getting hammered for some time now for sure. Not sure they employ many people on a percentage-of-workers basis though.
Trump had to deal with covid. He probably would have won otherwise. His luck hasn’t been perfect.
Let me reeeeeeee david
He easily would have won without COVID. Also I don’t think Biden gets a pass on not being able to campaign if no COVID.
Every time? How many billions did Facebook piss away on the "metaverse"? I'd like to watch a documentary about that on my 3D TV.
(I don't think 3D TVs will ever really be a thing, largely because AR is already capable of doing what it does better.)
Sure, Jan.
What will their moat be in a field of multiple heavy hitters that's probably soon to include China?
IMHO the moat between US and Chinese AI firms is far bigger than any moat among firms within the US or within China:
You could make the argument that OpenAI is trying to be Facebook, productizing their models and building the thing that will be the next social media.
Anthropic is trying to be Microsoft, getting into enterprise software through AI coding tools and bespoke models.
Google is... trying to be Google and leveraging their scale to offer inexpensive AI across all of their products to keep them sticky.
Microsoft and Amazon seem interested in providing infrastructure.
Meta seems to be throwing money at the problem and hoping that they can grab the next trend (e.g., AR glasses) and use it to sell more ads.
Personally I think the DGX Spark is evidence that Nvidia sees Apple as a competitor in AI research---at the "I fine-tune models on my laptop for production use in a data center" level.
None of these are clear-cut---Google also does infrastructure, Microsoft is developing frontier models, etc.---and they are all competing with each other vigorously. But these are all examples of things China won't be able to compete on in the US. They are cranking out open-weights transformer models and really impressive image/video generative models that absolutely compete at the "we need cheap AI solutions for some specific business problem" level. But no US company is going to run models on Chinese data centers for US customers or expose IP to Chinese coding agents; Americans aren't going to use a China-based social network; etc.
They are betting on compute being a moat, plus network effects (the story of Google's success, more or less). May not work, especially if hyperscaling stops being effective, but that's the idea.
What does this look like? One Google-like winner that achieves an insurmountable lead in CAPEX and lock-in?
They certainly think so and indeed are betting on search being a tentpole part of the profit engine in a very similar way. Hence the race.
I think Matt is pointing out that you cannot time it.
What if there is a recession now with a quick recovery, and business is booming again by 2028? Then your “just run on the bad economy” strategy will be fun.
What if AI props everything up and the recession is just middling? What if the recession doesn’t happen until 2029? What if it starts in 2028 but people aren’t really feeling it yet?
Maybe he’ll start an unwinnable war in Venezuela
The candidate for peace Donald Trump? Nahhh
Add in lack of major hurricanes to the US this season. Meaning there hasn't been a true stress test of the new "you're on your own" FEMA.
Is it possible the 2024 economy was better for Democrats and less good for Republicans? Ex. Covid seems to have resulted in people at the bottom of the wage distribution getting raises or new jobs with higher pay, meanwhile there was inflation hitting everyone. Or maybe the causality goes the other way and people who were hurting decided to vote for Trump…
The one way the AI boom goes bust that you didn't mention: companies realize that the productivity growth they're paying for (through staff reductions/efficiency gains/something else) doesn't approach the amount of money they're spending, and start balking. Right now a lot of the AI attitudes I'm seeing in business center around "we have to do this because our competitors are doing this and we can't be left behind." But you're starting to hear more public grumblings that the emperor has no clothes.
Noah Smith had a good column recently that AI doesn't have to completely flop to burst the current bubble, just underwhelming customers might be sufficient even if long term the top players settle into being massive, profitable, but fairly ordinary tech companies.
The railroad crash in the 1800s seems like a good model to keep in mind. The railroads absolutely were useful and a significant transformation of the economy, but enthusiasm got way out ahead of the returns, and a lot of shady financing caused real economic pain. We will see.
And, as has been mentioned often, that phenomenon had the benefit of leaving behind a very positive infrastructure legacy, whereas the hardware build-out happening now has a lifespan of only a few years (?).
Hey, the ruins of those data centers might be great sources of rare earths one day.
Hey - if AI leads to a bunch of power plants being built, we could have cheap power come from it!
I think one reason that railroads were so successful for the country is *because* people would over-invest and lose their shirts, but then a new group would show up with these relatively cheap railroad assets and then use them to do cool things. Similar with fiber rollouts in the early aughts.
Didn't this also happen with the internet? Startups going out of business laying undersea cables and stuff, later bought up by the very telecoms that everyone said were at risk of going out of business? I assume in the future AI will be an add-on to our phone bills.
Also an industry, like AI, with very high capex but low marginal unit cost. Tough business model. Airlines are another example, and they are constantly going bankrupt.
The problem with a capex-heavy, low-marginal-unit-cost business is that you need two skillsets to run it well, and they call for entirely opposite personalities.
One side is about raising capital and managing a complex set of loans, bonds and equity finance, which calls for either brilliant financial engineering or charismatic investor relations (or, ideally, both).
The other side is about recruiting customers, pricing carefully (price discrimination, price competition, etc) and about a relentless focus on cost control.
So one of these is either an extreme nerd, or else someone who is super-charismatic to investors; the other is a penny-pincher on costs and a nickel-and-dimer of the customers.
Well that and in a competitive market pricing ends up getting pushed to the marginal cost of product. So the business is “spend a bunch of money to sell products at cost”!
You usually see either concentration into monopolies or lots of failing businesses, depending on whether the market stays competitive.
That economic logic is also why utilities are heavily regulated: government sets profitable but not monopolistic rates and (tries) to address service quality and access through regs.
Airlines are interesting because they were all built under more of a utility model and then switched to competitive pricing. Rates came way down, and the airlines have all been failing or concentrating since (with the occasional low cost carrier emerging when the arbitrage opportunity is enough to justify the capex).
Being jaded and bored by today's AI models, which would have blown literally anybody's mind five years ago, before they're even really implemented in our economy or integrated into our society, would be the most 21st-century-America thing imaginable.
This is my read too, but I worked on some data center financing deals and I see the fallout hitting differently - I think AI losing its sizzle will cause big issues in the private credit market. My take is that the AI bubble is being built on a lot of less-than-stellar private credit deals, and when AI slows down/pops/whatever, the real damage will be done in the private credit markets once everyone starts to actually investigate the debt they hold (like the First Brands bankruptcy).
Anecdotally I'm at a company where leadership is pushing every department to "do AI." As a guy who builds data/ML models, and whose projects could be considered AI if you look at them from the right angle (but do not use LLM technology at all), projects that were tough sells three years ago are getting unlimited funding as "AI initiatives." It's fun but it does not feel sustainable.
Similar experience for me at a F500. Everything is having AI shoehorned into it.
This is happening at universities too... The FOMO is incredibly strong and there is fierce competition within to be the department / college / institute / center "where all the AI money goes". Being able to deploy AI technobabble confidently is one of the more useful skills these days.
For the sake of argument, I'll momentarily grant you the premise that companies are seeing productivity growth. I'd add another slash-item here--they come to appreciate the enormous technical debt they've accumulated because nobody actually knows how any of their shit works anymore.
If you're to believe "If anyone builds it, everyone dies" (GREAT book btw) then one can only hope the emperor truly has no clothes.
I know tech workers love to copium on this, but that ship has already sailed.
There have been meaningful, yet not overwhelming, productivity gains in many ways. No one is going to want to give them up just because we haven’t experienced the singularity.
If you rewind 5 years and said “I’ve discovered something that could increase productivity by 10%”, how much would that technology be worth? What about 5%? Those would be absolutely massive businesses.
So we might see a dotcom style bust, but no one is putting the AI genie as a whole back in the bottle. Last I checked, internet companies have done pretty well since 2000.
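Rough sizing of that thought experiment, assuming a ~$29 trillion US GDP baseline (my number, not the comment's):

```python
# Rough sizing, assuming a ~$29T US GDP baseline (my assumption); the 5%/10%
# figures are from the comment above.
gdp = 29e12
for gain in (0.05, 0.10):
    print(f"{gain:.0%} economy-wide productivity gain ~= ${gain * gdp / 1e12:.1f}T of output per year")
```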
But think of all the capacity you can easily add once you get rid of the meatspace low hanging fruit? You just have to pay a company a little more instead of going out and hiring squishies.
“Not only is the Orange Man bad, but he’s also bad at his job, and his effort to prop up the economy [by favoring AI]”
This could be correct but I wish more (all!) of our Big Tent dwellers were more fired up about the bad economic policies about which there is less uncertainty:
Big deficits are bad
Restricting imports is bad
Failing to attract high-skilled and high-potential immigrants is bad
Deporting people is bad _for the economy_
The thing I'm confused by is why the tariffs don't seem to be really impacting people or crashing trade. Krugman in particular seemed to be ringing five-alarm bells, but it's been mostly meh so far. It's so meh that it's way down the strategic list of what Democrats would ask for in the shutdown negotiations; people don't seem to be caring much.
My understanding, a layman's not a professional's, is that the tariffs have settled mostly at around 10%, and they're being split three ways. The producing country entities are eating about a third of that, the importers are eating a third, and the rest is being passed on to consumers.
So essentially what you have is a 3% increase in sales tax on imports, which are only about 15% of the economy, but not domestic goods. That will slow the economy as any tax increase would. So the effects are real, but absent other shocks that's a drag not a crash. It is just going to keep getting worse.
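Spelled out, with the rough figures above (all approximate):

```python
# The comment's arithmetic, spelled out. All inputs are the rough figures above.
tariff_rate = 0.10        # tariffs settled around 10%
consumer_share = 1 / 3    # roughly a third passed through to consumers
import_share = 0.15       # imports are ~15% of the economy

on_imports = tariff_rate * consumer_share     # ~3.3% "sales tax" on imports
economy_wide = on_imports * import_share      # ~0.5% one-time price-level effect
print(f"{on_imports:.1%} on imports -> ~{economy_wide:.2%} economy-wide")
```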
The alarm bells were from when rates looked like they'd be higher, with commensurately higher retaliation, and also didn't take into account the AI spending and the various corruption loopholes being carved out.
The Peterson Institute has a nice tracker on tariff revenues, though it's only updated through July so far. Since April, tariff revenue is about 6% of total import value. It varies a lot by type of import; for example, tariff revenue is over 13% of total import value for consumer goods so obviously consumers will most directly feel the impact of that, depending on how much retailers pass on those costs.
https://www.piie.com/research/piie-charts/2025/trumps-tariff-revenue-tracker-how-much-us-collecting-which-imports-are (see the data link at the bottom of the page)
That's interesting and makes sense. It will be interesting to see how this plays out politically. Trump et al. are going to point out that Democrats predicted tariffs would crater the economy and were wrong. Democrats will respond with what exactly? That he said he was going to raise tariffs 400% but only raised them 10%? Or that economic growth is very weak or non-existent and absent the tariffs it would be better? I don't know how well that sells, especially since Republicans will do a lot of lying both about the economy and tariffs.
This is exactly right, there’s a great FT piece showing that the effective rate of tariffs so far has been about 1/3 of the nominal proposed rates, because the admin has declared so many modifications and exemptions.
I think that's sort of the implication of this article. Absent the huge AI investment boom, the effect of the tariffs would be more noticeable in stuff like GDP and stock prices.
And it's actually not true that people haven't noticed. Consumer sentiment has plummeted. https://www.sca.isr.umich.edu/. Grocery prices keep going up rapidly. https://www.wsj.com/economy/consumers/grocery-price-inflation-customer-reactions. And it's showing up on Trump's numbers regarding the economy, with polling showing a little over half disapprove of his handling of the economy. This doesn't sound huge, but in the first term, his numbers were much stronger on the economy and is what really held up his overall approval just above water. So people are noticing.
The difference is the press is giving rising prices about 5% of the coverage it gave them in 2021. Part of this is because inflation really was rising more rapidly then, and it was a new phenomenon after basically 2008*. But most of this is press incentives. I've said this many times before, but I think we actually underestimate how much right-wing media speaking in one voice gives them power over what the "news of the day" will be. And I suspect it's part of what's going on here.
* I think we underestimate how long 13 years is. Because basically for almost 13 years from 2008 to 2021 we had what amounted to a ZIRP environment and little to no chance of inflation (in fact for periods there was more chance of deflation). I mean we had true ZIRP from 2020-2021, but really interest rates were functionally nothing post 2008 crash. Think we underestimate how many people took cheap borrowing and no chance of inflation as a new normal.
Maybe the price issues will get more salient the closer we get to an actual election. Or if there are protests or something about the economy. Wonder if anti-trust Democrats will change their tune about greedflation or whether it will be business as usual.
Jason Furman said tariffs would create about a .6 percent drag on GDP growth. The reason for the hair on fire reaction from economists had more to do with how it was definitely a stupid idea than the idea that it would turn us into Italy in a year.
It's also confusing to explain because even though tariffs do raise prices, they aren't really inflationary since they are taxes.
They are inflationary in a literal sense, there just happens to be a counterbalancing force inherent to tariffs (negative demand shock) that partially offsets that effect.
There's bad for the economy over the long run, where you fail to have the growth you should have had, and there's bad for the economy now where people notice and feel the pain.
The things you list fall mostly in the former category. Virtually any return to normalcy will quickly reverse three out of four, so I don't know how fired up about them you need to be, since they're pretty universally considered dumb shit to do, and that's the reason no one has tried hard to do them before. The deficit is its own issue.
It's funny how the deficit seems inevitable, but if you take out all the terrible Republican policies since the Bush era - Iraq and tax cuts, mostly - then there wouldn't have been a deficit at all. It's also likely that Clinton would have been well on her way to balancing the budget, continuing Obama's approach, before Covid necessitated running huge deficits, which any executive would've needed to do, but it wouldn't have happened on top of an already significant deficit runup over the previous three years.
It would probably be beneficial for Democrats to talk loudly, constantly, about how they reduce deficits, which is something they kind of whisper, or don't bring up much, because they are afraid it would make their always-cranky left flank crankier. Part of that is the legacy of the GFC, when the government was legitimately overreacting to deficits and it was mostly the left calling it out, but it's a different time now in many, many ways. The party can't stop ignoring those voices soon enough... this is why I kind of like the idea of Michael Bennet in 2028. I really think he gives no fucks about the left but isn't hostile to their policy demands, just very pragmatic, possibly to a fault.
Yes, and: This is why, beyond what MY is saying, anyone on the left wishing for the "recession delivers Dems from Orange Man" narrative is bat[dookey] crazy. We are in a horrible place fiscally to be heading into a recession. Most state and large local governments are facing fiscal pressures they don't know how to solve and will need to cut spending or increase taxes to balance budgets; the OBBBA has massively increased the federal deficit. All this means the kind of fiscal stimulus typically needed in response to recessions will be harder to do (if not impossible) and less likely to work. Even if a potential recession delivers wins to Democrats, they're walking into the booby prize of being in charge while people are experiencing the full pain of the resulting downturn, with less ability to deal with it. And since all the Dems' big economic and domestic ideas over the past decade have involved spending that won't be possible without massive revenue increases beyond the parameters they've been willing to accept, they won't be able to do anything they've been promising without completely new ideas and are totally ill-equipped for a climate of fiscal constraint. No one should forget that many of the roots of what we are seeing now in Trump, and of the larger political dynamics around younger voters and other groups moving toward the right, date back to the fallout of the Great Recession. Democrats desperately need smart ideas for navigating the real economic challenges the country faces (including whatever happens with AI) and don't seem to have them.
I'm quite worried about a bubble. Maybe ignorance is driving my worry, but I'm still failing to see how LLMs are going to be so transformative so quickly. They won't be a complete non-factor, of course, but I'm still skeptical of some of the use cases claimed.
And I'm not even thinking about the politics here. But since this site does cover it, the motivated reasoning can cut both ways: Matt very much wants a dominant Democratic Party that can win elections on a regular basis akin to the New Deal era. But he is correct that there could be plenty of monkey's paw energy here: it's easy enough to concoct a scenario where a Democrat narrowly wins the presidency in 2028, but the bubble doesn't burst until during that Democrat's presidency, and it turns into a poisoned chalice election that instead creates an enduring *Republican* majority in the 2030s and potentially beyond.
>I'm still skeptical of some of the use cases claimed.
That's because at least some of the use cases claimed won't work out. That's normal and expected for any new tech platform with many possible applications. Very few startups (I use this term here based on age and stage, not size), even very successful ones, actually do anywhere near everything they claim they will on the timelines they project publicly.
A bubble is a stronger claim - it means too many of them won't work out, so that the companies can never grow revenue enough to become solvent once the investment dries up.
The bull case presented here seems pretty weak.
Many bubbles aren't predicated upon investors snapping up inherently worthless assets (Beanie Babies) but rather on overly aggressive speculation in genuinely promising technologies. During the lead-up to the Panic of 1873, there was massive speculation in railroads, such that many were built to go literally nowhere. Similarly, in the lead-up to the dot-com bubble, telecom companies built out a massive network of high-speed internet cables that ended up being useless.
Both railroads and the internet obviously transformed the world. And, in the lead up to both crashes, there were some companies that were turning a profit. But the scale of investment outpaced the ability of most companies to service debt in real time.
It’s hard not to see the similarities to AI today.
"built out a massive network of high speed internet cables that ended up being useless."
They ended up being 100% utilized. Their timing was just a little off.
For sure. Which kind of strengthens the point that you can have a fundamentally promising new technology with an ultimately justified infrastructure buildout and STILL have a bubble
If you have an awesome new technology with a huge likelihood of being transformative almost by definition you're going to get a bubble. Which of course doesn't mean that the technology will prove to be a failure; typically just the opposite, with an unfortunate recession or panic briefly interrupting things.
The Nifty Fifty are also an instructive case for how solid businesses that are just very overbid can work out.
Re: the economy rolling over—
The biggest reason to think that this is happening isn't AI entering bubble territory (although FWIW, capability growth clearly leveled off this year, and the OpenAI spend that a lot of CoreWeave's and Oracle's projected data center buildout is financed against probably won't happen unless they can get that going again very quickly; as with the dot-coms in the late 1990s, the underlying tech is economically valuable but there's a good chance that related capex is overshooting), but much more boring movements in "normal" macroeconomic indicators:
1: ISM manufacturing PMIs have been in contraction for most of the year
2: Services PMI stalled to flat territory in the last print
3: Activity indicators in trucking and logistics are way down
4: Quit rates in construction (a typical leading indicator) dropped to levels not seen since the GFC
5: Retail and manufacturing companies started seeing significant margin contraction even last quarter, and FIFO inventory accounting effects are likely to make this worse over time
6: Employment in the most cyclical private sector categories is in contraction
7: We’re seeing significant credit default problems in cyclical industries (Tricolor and First Brands bankruptcies).
8: Employment growth in general has rolled over, and although we don’t have the latest NFP print due to the shutdown, ADP’s private sector payroll growth estimate is in the red two months in a row (after revisions).
And all of this is happening in a stagflationary, policy-engineered supply shock (tariffs, the shift to immigration restrictionism, a permitting regime that's increasingly and irrationally hostile to low-LCOE wind and solar power deployment) — and because of that, inflation has been accelerating at the same time that employment is rolling over, making it much harder to do a monetary rescue of the economy without causing really high inflation.
I'm not sure about capability growth leveling off. Those METR task-coherence curves are continuing to be straight lines, and IMO gold wasn't predicted by most markets until next year. Per Anthropic and OpenAI insiders, they already use models to do most of their coding.
Robotics is super obviously still in the "picking low hanging fruit" stage, Sora / VEO also seem like they're in "straight line go up" world-model coherence.
I’d recommend looking more closely at the METR capability evaluation stuff. Even at the 50% reliability level, GPT-5 was a 50% improvement over o3 rather than o3’s more than doubling of its precursors’ times, and at the 80% reliability threshold, coherence length growth was a much more modest 20%-ish. And I understand that even the METR evaluations show some degree of Goodharting— real-world performance is less impressive. (Similar issue with the IMO problems and other benchmark scores.) And, well, although the exact figures are hard to come by, GPT-5 was likely at least twice as expensive to train as o3 (and possibly five to ten times as expensive) even though some architectural improvements likely made each unit of compute involved cheaper.
I’ve also gotten a lot more careful about how much I credit OpenAI insiders’ claims about what they’re cooking— GPT-5 was much less exciting than like, roon’s posts led me to think it would be. These guys are very smart and they’ve done some impressive work, but they’re also talking their books.
(FWIW, I think that AI is going to be a very economically important technology, and that we likely eventually see further scaling economies with more technological breakthroughs. I definitely think that AGI is possible and am concerned about its potential consequences. But OpenAI’s current business model, cash burn rate, and capex rate require them to succeed very quickly and smoothly.)
To bring it back to AI, did increased capex (and Trump's moves to boost asset prices) fill a demand shortfall? Or did these moves just raise asset prices while increasing exposure to AI across the financial system?
That’s a good question. I’m not 100% sure, but I think that it helped maintain foreign equity investment inflows into the US (hedged with short positions in the dollar).
This all suggests a slowing economy. Would that also suggest a recession? Don't recessions tend to have an identifiable cause? (E.g., the housing bubble popping, the Volcker interest rate hikes).
What would be the identifiable cause here?
The president enacting the biggest tariff increases since Hawley-Smoot and driving net immigration down to 2020 pandemic-like levels. And then on top of that, we might get a credit crisis (driven by defaults on high-yield loans to subprime auto borrowers, underwater commercial real estate holders, and small and medium retail and manufacturing businesses with tariff exposure — we're already seeing this with the Tricolor and First Brands bankruptcies, with regional banks like Western Alliance and Zions also reporting issues with their debt today), a bubble burst in crypto (enormous liquidations during a relatively smaller drawdown last week revealed just how leverage-dependent that whole expansion has been), and then maybe also the AI bubble popping (with consequences ranging from total wipeout to painful but survivable multiple compression for the major firms involved).
Even Krugman has long said that tariffs are highly unlikely to cause a recession.
Are subprime auto borrowers that big a thing, like the subprime housing was in 2007? Seems highly unlikely.
I could see a crypto bubble popping causing a financial crisis if the big financial entities get highly leveraged in crypto but we're far from that. It's still limited to the sharks and the marks.
Yes, the AI bubble popping could propel us into a recession.
I mean, the leading indicators have been screaming “contraction” since the initial tariff announcement in March and employment growth has collapsed, which is very strong evidence that the economy is in fact slowing down.
Re: credit bubbles — small and medium enterprise lending, commercial real estate, and auto loans collectively amount to double-digit trillions in total loan value in the US alone. A significant increase in defaults could easily blow up a lot of different financial institutions, and if lenders get scared and tighten underwriting, the demand impact from credit drying up could get pretty severe.
Matt is wrong in a very important way about one of the ways the investment thesis could be mistaken:
“Increasingly capable models might do something harmful that destroys rather than creates economic value. The Entity from the “Mission Impossible” movies was surely bad for stock prices, despite its impressive technical abilities”
No. This doesn’t serve as a counterargument to investment for the same reason that it doesn’t make sense to buy end-of-the-world insurance: in the case that you’re right, you’ll never collect. Investors who think it’s 90% likely that ASI kills us all and 10% likely it doesn’t but remains the most important technology in all of history would be acting monetarily rationally to be long AI because there’s no way to short the human race.
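The decision logic, with invented payoffs just to show its shape (nothing here is a forecast):

```python
# The "no way to short the human race" logic, with invented payoffs. In the
# doom branch nothing pays out, so only the survival branch enters the math.
p_doom = 0.90              # the hypothetical investor's belief
ai_long_payoff = 20.0      # portfolio multiple if AI wins and we survive (invented)
diversified_payoff = 1.5   # portfolio multiple otherwise (invented)

print((1 - p_doom) * ai_long_payoff)       # 2.0  <- long AI
print((1 - p_doom) * diversified_payoff)   # 0.15 <- diversified
```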
Not to mention if your main AI fear is massive job loss you want to invest in AI as much as possible as a hedge against the loss of your job.
"there’s no way to short the human race."
The Fountainhead reference?
"I play the stock market of the spirit and I sell short."
--Ellsworth Toohey
A less capable AI might cause people to stop investing in AI without destroying the world. And the other 3 scenarios Matt gave are valid. More prosaically, we might find that AI has a hard time turning a profit for mundane reasons, for example because bottlenecks shift to other places, or AI companies or the businesses that utilize them have a hard time capturing the value they generate.
I feel like the fact that you have to be quite old to remember normal recessions makes people a little too blasé.
I was in the labor market for years during the dot-com bubble bursting because I went to work right out of high school. It was nothing like the Great Recession or the Covid slump. Obviously you would rather not have a recession than have one, but many presidents have survived recessions. This isn't to carry water for this administration, just to point out a recession isn't a get-out-of-jail-free card.
The nature of recessions has changed; they used to be a healthy flushing of crap from the business world, but we have been abnormally propping up the economy with tax cuts, crazy-low interest rates, and stimulus spending since the Bush admin. While the GFC and Covid were difficult periods, we haven't moved past the underlying thesis of grandiose spending to solve everything. Just look at how the stock market was hungrily circling a half-point interest rate cut recently, anticipating a return to the 'good ol' days' of cheap money.
Ok there Andrew W. Mellon.
>many Presidents have survived recessions.<
Yes, if they get them over with sufficiently early and enjoy a strong recovery (ie, Reagan, FDR 2nd term, Eisenhower, JFK/LBJ, Nixon, etc). Otherwise, (eg Carter, Trump 1), not so much. Timing is the key. Democrats pining for a recession had best not hope we're already sliding into one, because that would make for uncomfortably high odds that the Republican nominee in 2028 is enjoying a strong expansion: the average recession since WW2 has lasted about ten months, and even the two longest ones were over after eighteen months.
Perhaps, but Trump is also actively trying to murder the economy in a way basically not seen from any American president in the modern era.
>Trump is also actively trying to murder the economy<
Zero doubt of that.
I'm just saying that, for purposes of a restoration of sanity and democracy, it would be better if the patient died later rather than sooner. An *early* recession is Josh Shapiro's (or Gavin Newsom's or Gretchen Whitmer's or AOC's or JB Pritzker's) worst nightmare, because a recovery starting some time in the first half of 2027 (or earlier) makes Vance (and yes, I do believe it'll be Vance) all the harder to beat.
I believe if the administration's economic vandalism doesn't really begin to catch up with them in a major way until, say, the latter part of 2026, the eventual Democratic ticket stands a very good chance of being in a strong position for 2028 (because in that scenario, even a normal recession probably means a recovery doesn't get under way until almost the end of 2027). I mean, there are a number of things about the current administration normies are uncomfortable with: add surging joblessness to the mix (assuming we still have fair, competitive elections), and the Democratic nominee should be favored to win.
I almost wonder sometimes if various players in that administration secretly *want* to engineer a recession as soon as possible, for their own purposes.
And yes, as a matter of fact I do think economic fundamentals still matter a great deal: 2024 reinforced this in my mind (not just in the US but in many countries).
Just chiming in to agree. Being old enough to have seen this dance through a few decades, the electorate is most sensitive to the rate of change---going into an election recovering from a terrible recession is better for incumbents than entering a mild recession from an historically strong economy.
Though I tend to think that this administration is best understood as thinking zero steps ahead and living entirely in the present; front-loading all the trauma may have the effect of setting them up to run on a "things are getting better" message (because we cratered the economy last year), but I feel that is the sort of thing they would figure out in the moment and then come up with a pithy slogan for.
I'd rather Democrats take the house and senate in 2026 than increase their odds in 2028.
I also think Rs will turn on a dime to try to inflate the economy if they think voters are souring due to macroeconomics. This is what's behind the attack on Powell and the Fed - the admin wants to be able to turn QE on like a tap if they need to.
They'll definitely try to move the economy to help them if they can.
You don't have to be that old to remember a sluggish job market throughout the 2010s, even if it wasn't a per se recessionary time.
I mean sure. But that mostly fits my broader point. Obama won reelection. It wasn't great but it didn't function to make the opposition’s job easy.
The only official recessions most millennials and Gen Z have known were catastrophic black-swan events. Which really did substantially shape the '08 and '20 elections, but it's not conclusive, and if you extend it to include rough patches it's even less conclusive.
I find it funny how Trump sells tariffs as a great growth strategy and then proceeds to give tariffs exemptions to the crown jewel of American economic growth.
This makes more sense when you realize that tariffs are just a tool for corruption and power consolidation for a wannabe autocrat, but it is funny.
I don't know if it's a bubble or not either, but the fact that Sam Altman is bragging that the next innovation for ChatGPT is dirty talk leaves me with some doubt (read to the end).
https://x.com/sama/status/1978129344598827128
"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).
In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults."
It has always been thus. In the fiber/networking boom of 2001-2005, I was at trade shows where speakers calculated that 30-50% of bandwidth was going to Napster (music-sharing) and porn.
Nowadays more bandwidth is dominated by bots than makes sense.
Edit: I should add that these bots are all too often being used to inflate other online markets, either through fake engagement or by trying to make a better search tool out of AI.
Something to the effect of: "We are really close to solving cancer, but in the meantime, here is a horny chatbot."
This is the abundance agenda we need????
No obviously we need this combined with humanoid robots
To make sex bots!
Isn't there some axiom about all technology being primarily used for porn? That's certainly where the advances will come from.
There's a movie about this. https://en.wikipedia.org/wiki/Middle_Men_(film)
Not sure it's that porn pushed forward internet technology per se, but I think it played a key role in the innovations behind internet commerce and basically in how to make money off the internet.
More that porn drives initial adoption, as seen in VHS/DVD/Blu-Ray and more recently, VR headsets.
I think that used to be true but is it still? MindGeek (or whatever they changed their name to) surely makes a ton of money, but they aren't driving the economy.
Aren't there many technologies that reach their takeoff point (both in terms of capability and revenue) when they figure out how to apply them to the porn market?
AI and porn are such a marriage made in heaven that I'm surprised it took this long.
It didn't occur to me until now, but this might also make sense as a way to prop up some political support (among men at least).
Subconscious monologue:
“Sure my job and investments seem more precarious now but wowsies I can’t believe how hot and exciting my TriciAI gf is.”
I hope this is a sign that they're genuinely solving some weak type of alignment and not a way of staying in the news at the end of a relatively disappointing year
I think your hope is wildly optimistic.
I just tried to use ChatGPT for an hour last night to do a simple task - taking names from a list of publications in a Word file and putting them in a spreadsheet (a little more complicated but not much). It failed so miserably that I started cursing at it, at which point it refused to engage anymore. 😆
And here I am wishing AI were a bubble because I don't want it to drive the value of human labor to zero, like a chump.
AI could be universally better than people at everything and humans would still have valuable work. Bangladesh makes clothes even though the US can make them better, and Walmart hires elderly greeters even though celebrities could greet people better.
What a bright future Sam Altman is bringing us!
"Bangladesh makes clothes even though the US can make them better"
Drone strikes on very definitely, 100% for sure, narco-terrorists operating out of Dhaka textile mills when?
We'd also not all die which would be nice.
That would also be very nice
I'm not sure Matt's wrong, but I will say anecdotally that some pretty right-wing people I know are also concerned about the possibility of an AI bubble and looking to move their money elsewhere.
I thought all their money was already in crypto. You know, for safety.
Yeah, I think there are rational reasons to want to take some profits and diversify. Into what, I don't know exactly. Everything seems to have boomed in the last few years except maybe healthcare stocks (?).
I’m genuinely confused and hoping someone who knows more than me can explain.
It seems like there's strong evidence that scaling has broken down. AI developers appear to have exhausted the supply of high-quality training material around 2023, when GPT-4 came out. The compute used to train GPT-5 was reportedly 20–50× greater than GPT-4's, yet the result is only moderately better.
So why are sophisticated companies still pouring billions into new compute infrastructure if scaling returns are collapsing and the current models aren’t yet capable of doing much real work?
My naïve play would be nesting LLMs within other programs that reason more precisely. I suppose AI coding is improving quickly enough that possibly all this new compute could be used for iterative self-improvement.
What am I missing?
The potential revenues from being the first to reach AGI are theoretically so monstrous that it's rational for these companies to keep investing, even for a 1% chance.
I am interested in anyone explaining how close we are to AGI. Lay out a model for what it will take to reach AGI and how close we are. How many 'ideas' need to be represented to achieve AGI? How many 'ideas' are currently represented within today's LLMs? That is the type of discussion that would allow me to reason about whether the AGI argument is real or just marketing fluff.
AGI is where AI can improve itself faster than humans can; we aren’t there yet but it is possible we could get there some day. It’s just hard to imagine having an AGI that is also effectively enslaved by a particular AI company and used to generate revenue for it.
I truly don't understand why they think they'll necessarily control it; these guys are building it because they watched movies and read books. We know what happened in that media.
That is moving the goalposts on defining AGI. But even using your definition: what is the mechanism it uses to improve itself? What does it need to be able to do at that point? What is a model for how close we are to that?
My point would be that the current LLMs don't represent data in a way that would let them self-improve, and that they are not storing data in any real way that could get to AGI. The use of human-intelligence terms like "reasoning" when talking about LLMs means that people don't have a clear idea about their current limitations, their limitations in future growth rates, or their limitations in absolute capabilities.
> What is the mechanism that it uses to improve itself
It has the source code and settings and training data that were used to build itself. It looks at them and creates another revision, then evaluates whether that revision is better or worse. Provided with inputs of energy, this could happen without human intervention. If the new one is better, it takes over.
(I'm ignoring the alignment problem of the AI's own creation having value differences with it.)
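Here's that loop as a toy, runnable sketch; a real model obviously isn't a single "capability" number, this just shows the evaluate-and-replace shape being described:

```python
import random

# Toy, runnable version of the loop described above. A real model obviously
# isn't one number; this only shows the evaluate-and-replace shape.
def evaluate(model):
    return model["capability"]                 # stand-in benchmark score

def design_successor(model):
    # Stand-in for "reads its own code/settings and proposes a revision."
    return {"capability": model["capability"] + random.gauss(0, 1)}

model = {"capability": 0.0}
for _ in range(1000):                          # "provided inputs of energy"
    candidate = design_successor(model)
    if evaluate(candidate) > evaluate(model):  # the better revision takes over
        model = candidate
print(model["capability"])
```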
That posits capabilities that the LLMs don't possess. Just because all these companies and researchers are using human terms doesn't mean they are actually doing it. While the finely crafted mirrors that are the current LLMs seem to do human things, they have no reasoning.
I think you and everybody else would like to know that. Even the people working on this stuff have very different ideas about the timeline. I don't trust the Altmans of the world to tell the truth. But even the really smart people who are immersed in the development don't agree.
The capability returns to scaling are steady, holding to stable patterns. The patterns are logarithmic, so linear returns to exponential inputs are exactly what they predict. And I assume you're comparing GPT-5 to GPT-4 for that multiple? Because GPT-5 used less pre-training compute (I'm still not sure why they switched from calling it 'training' to 'pre-training') than GPT-4.5, because that's no longer the most efficient way available to improve performance.
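Here's the "linear returns to exponential inputs" point as a quick toy calculation; the constants are invented for illustration, not fit to any real benchmark:

```python
import math

# Invented constants, purely to illustrate "linear returns to exponential
# inputs": if score grows with log(compute), every 10x buys the same fixed bump.
def score(flops, a=10.0, base=1e21):
    return a * math.log10(flops / base)

for flops in (1e22, 1e23, 1e24, 1e25):
    print(f"{flops:.0e} FLOPs -> score {score(flops):.0f}")   # 10, 20, 30, 40
```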
The training data problem exists but the companies knew about it before 2023 and have been following a series of other paths forward. Optimizing data. Post-training methods like SFT, RLHF, DPO. Chain of thought. Reasoning models. Better system prompts. Tool use, agent harnesses, and other scaffolding to help the AI actually use its capabilities (aka you don't expect a human to solve every problem in their head based on a single instruction on their first day at work).
There's also distillation, which is when you train a powerful model that's too compute-expensive to use for everything, and use it to train a smaller model optimized to have higher concentrations of desired capabilities compared to if you just trained the smaller model directly. Like how undergrads are better at solving problems after a professor teaches them, but you wouldn't hire a professor to solve every problem.
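For anyone curious what that looks like in practice, here's a minimal sketch of a Hinton-style soft-label distillation step; the models, batch, and hyperparameters are stand-ins, not anyone's production setup:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of soft-label distillation: the student is trained to match
# the teacher's softened output distribution at temperature T.
def distill_step(teacher, student, x, optimizer, T=2.0):
    with torch.no_grad():
        teacher_logits = teacher(x)                    # expensive model, frozen
    student_logits = student(x)                        # small model being trained
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                        # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a linear "teacher" distilled into a linear "student".
teacher = torch.nn.Linear(10, 5)
student = torch.nn.Linear(10, 5)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
print(distill_step(teacher, student, torch.randn(32, 10), opt))
```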
One thing you might be missing is, do we need more than that? Where are the capability thresholds that matter for profitable adoption? The metaphor I usually use is that 3-4 years ago the best models were like elementary school students. Now they're bright undergrads or grad students. How many more years of that form of scaling do we even need for big practical impact? There's still a lot of capabilities spikiness and other problems to figure out, but those are different directions of improvement than the kind of scaling you're talking about.
Also, I'm not sure what your use cases are, but I disagree that GPT-5 is only a small advance over GPT-4 in terms of practical impact. And if you haven't tried it, my current preferred model for most things is Sonnet 4.5. Also, most people use LLMs really poorly in my experience. The difference between the results you get from an average prompt with no context and what you get from a well-crafted prompt with sufficient context (aka the kind you'd give a human if you expected success) is enormous.
I spend a lot of time chatting with ChatGPT; I generally prefer its personality to humans'. I certainly prompt it in different ways.
If I give it an interesting seed, it can smooth out the tone and make my writing better: look up examples, proofread, polish phrasing, etc. However, it rarely has good ideas; it offers good phrases that a human brain can manipulate to improve one's writing.
That's true. They've hardly surpassed us at that (yet?).
IMO, humans also rarely have good ideas. We give them a lot more time and shots on goal before they come back and tell us their ideas. And there are a lot more humans running around who might sometimes have them.
When people say ChatGPT can't write a legal brief because it sometimes makes reasoning errors or hallucinates, it seems relevant that 63% is a passing score on the bar exam. Most humans, even law school graduates, are not great at legal reasoning.
LLMs are already better at writing than the median high school senior, they could probably get a B- in many college classes with light prompting.
My only point is LLMs still can’t produce a better first draft than I would given reasonable time. This is still true for a material number of human writers, but the number will fall every year.
AI is already a very good editor, maybe not New Yorker level, but good enough for everyday stuff.
This makes sense. I'm not a particularly good writer myself, outside specific kinds of professional writing. I find AI writing with a well-structured prompt to be passable for many purposes, but definitely not all.
I'm curious what kinds of prompts you've used to generate first drafts? Or if you've tried any more complex setups like AI agent teams, where one LLM supervises and edits the work of another, or multiple LLMs work on different tasks and sections and then another combines the pieces and another edits it.
I do find, "Wait, does this rule out (most) humans?" to be a useful thought experiment when interpreting any argument of the form "AI can't do X."
Legally, AI is best when you spot the issue and tell it what to argue. It’s absolutely excellent if you get surprised in court but know enough to spot strong arguments and just need authorities.
I've found that when I give it a paragraph or two of facts and ask for "the best argument that Y, supported by citations to the official code of georgia and georgia appellate decisions," it generates excellent, transparent first drafts. You absolutely need to cite-check them, but a partner really should cite-check an associate-written brief. If well prompted, it is as useful as a second- or third-year associate, but you have to know a fair amount to spot the issue on your own. Its issue spotting is rather dubious.
GPT 5 has, ironically, degraded user skill at using AI at my company. It’s much better at handling bad prompts, so people have decided (wrongly) that prompt and context engineering is no longer valuable.
I've noticed this, too. Earlier this year I helped build a training session series on LLMs for my coworkers. The overall effect was a 20% improvement in efficiency on certain types of tasks. But, that was 0% for some people, 80% for others. User skill still makes a *big* difference.
Although, even before this, very few people put much effort into how to build good prompts, not even using very basic strategies like "ask the LLM to optimize the prompt for you before answering it" or "explain why you're asking and what kind of answer you'd like."
From back when o3-mini was at the frontier, there was an ACX post (https://www.astralcodexten.com/p/testing-ais-geoguessr-genius) on how Kelsey Piper used it to do some amazing things in GeoGuessr (she shared her ChatGPT prompt for this at https://x.com/KelseyTuoc/status/1917350603262681149). That prompt was about 1100 words long. In contrast, a picture plus "Where is this?" does not get you much even now.
When OpenAI released Deep Research and people were trying to figure out how to use it well, this person (https://x.com/buccocapital/status/1890745551995424987?lang=en) used O1 Pro to create an optimized prompt to get Deep Research to do deep research on Deep Research prompting strategy. He gave the resulting report back to O1 Pro and it used it to generate an optimized prompt template for future Deep Research prompt optimization. Promptception matters; the models understand what kinds of approaches will tend to yield what kinds of results.
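For the record, the basic pattern is only a few lines; `complete()` below is a stand-in for whatever chat API you use, not a real library call:

```python
# The pattern in miniature. complete() is a stand-in for whatever chat API you
# use (nothing here is a real OpenAI/Anthropic call).
def complete(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def optimized_ask(task: str) -> str:
    meta = (
        "Rewrite the following task as a detailed, context-rich prompt. "
        "Add the role, constraints, and output format a strong answer needs. "
        "Return only the rewritten prompt.\n\nTask: " + task
    )
    better_prompt = complete(meta)   # the model improves the prompt first...
    return complete(better_prompt)   # ...then answers the improved version
```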
Aren’t they predicting exponential increases in capabilities though? Certainly that seems to be what “AGI in 2027” would be.
Yes... but not in 2025. RSI could do that, but that's from algorithmic progress, not (just) compute scaling.
Also it's good to define what "exponential increase in capabilities" means more precisely. "Double task horizon time every 4-7 months" seems pretty exponential. Inference costs have been falling by 1-2 OOMs/yr for a given performance level, which is also exponential - so far we just care more about better performance than lower cost.
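The arithmetic behind those two claims, for anyone who wants to check it:

```python
# Arithmetic behind the two claims above, using the quoted ranges.
for doubling_months in (4, 7):
    print(f"{doubling_months}-month doubling -> {2 ** (12 / doubling_months):.1f}x task horizon per year")
for ooms in (1, 2):
    print(f"{ooms} OOM/yr cheaper -> {10 ** -ooms:.0%} of today's inference cost in a year")
```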
From what I've heard a large majority of code at Anthropic and OpenAI is AI-written, presumably at least sometimes with not-yet-publicly-released models and more compute. The humans are still providing the direction and evaluation. That changes where the bottlenecks are, but I'm not sure how much of a speed-up it actually is in algorithmic progress yet.
Also, my understanding of predictions like "AGI in 2027" is that the people saying this see it as their modal year - aka the year in which it could happen if each individual thing goes the most likely way (aka 'right'). Those same people are often on the record saying that they understand that some of those things will not go right, even if they don't know which ones, and so their median years for AGI are substantially further out, sometimes by as much as 5+ years, with significant probability on it being 15+ years later (see https://ai-2027.com/research/timelines-forecast).
If you assume that the way forward is "AGI" and the way to get there is "train bigger models," then the current strategies make no sense. One of the big breakthroughs that DeepSeek implemented well was the "mixture of experts," which rolls a bunch of small models into a bigger "LLM." These smaller models specialize, and a gating network acts as a router, directing each input to the best sub-model(s) for the task. This is a highly efficient approach because you don't have to load a giant model into memory and run inference on the whole thing for every input token, and you don't have to train a giant model to do everything-all-at-once. You do, however, need tons of compute to fine-tune a bunch of small models, test them, and run inference at massive scales.
AI coding is a good example of how breaking the problem up into smaller parts can be very effective. You can have an "LLM" that comprises smaller models that are each good at a different aspect of a different language. Another can be good at calling certain tools like git or sed, or searching the internet, or whatever. The marginal return of training bigger and bigger models on all the programming languages at once gives way to training super-specialists that just know how to write networking code in Rust or fastapi in Python, etc.
Eventually, if you glue enough of this stuff together in clever ways, you get something that feels much more capable and intelligent, because when you ask it a question it is very good at finding just the right sub-model, which is in turn very good at what it does. You're still burning tons of compute on inference, but it's spread out (physically across a data center, conceptually across experts).
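Here is the toy sketch of top-k routing mentioned above, in plain PyTorch. This is my own illustration, not DeepSeek's actual architecture; real MoE layers add load-balancing losses, capacity limits, and expert parallelism. The point is just that only k of n experts run for any given token:

```python
# Toy mixture-of-experts layer: a learned gate picks the top-k experts
# per token, so only a fraction of the parameters run for each token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.k, dim=-1)  # top-k experts/token
        weights = F.softmax(weights, dim=-1)              # normalize gate scores
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                  # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(ToyMoE()(tokens).shape)  # torch.Size([10, 64])
```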
What’s your background?
Professional scientist, lifelong computer nerd.
There is a ton more data to be had. As Geoffrey Hinton has said, recent models figured out the world from just text, which is very compute intensive. The next step is to have them explore the physical world.
Can they do that efficiently with presently available technologies? I suspect causation is illusory and the world is too physically complex to understand deeply simply based on computation.
True, although I have read they're now also using audio and video data in pre-training, not just post-training, for multimodal models. Robotics can offer totally different types of info and feedback (haptic? better causal reasoning? planning?). It's amazing how well models have grown to understand physics (as shown by video generation models' progress) without that.
Doesn't necessarily matter if the world or causation is illusory - the illusion, at least, still exists and we can interact with it, therefore it can be understood.
Have you ridden in a Waymo? That would be one of the premier uses of AI in a 3D environment.
I think you miss the point of Waymo's system design and their use of AI. They have constrained what the AI portion has to do to very narrow problems, and thus they can work through issues in a clear way, with discrete safety systems as a backup. For the driving, they have applied the Floridi Conjecture: narrow the scope to increase the certainty.
Define "very narrow problems." The Floridi Conjecture just means the AI operates like a human-driven car with pedestrian detection, automatic emergency braking, etc.
I have not.
It's very impressive, as the AI can identify what it "sees" using video and LIDAR but also predict the behavior of that person or object. When you're in the back, you see it hit a ton of edge cases where it is able to deal with people and vehicles doing weird things, because it "knows" what things are and what they are going to do.
Ok, one more comment. The issue is that you have to be able to explain what you want to a "brainless robot" (Judea Pearl's term). We understand very well how to explain to a brainless robot what it means to do a good job at predicting an outcome (in the LLM case, the next word) given some covariates. It can take those instructions and work really hard at learning how to do that (searching through all possible prediction functions in a certain space, seeing how they do on the data).
For reasoning, we barely understand how to explain it to other humans. And we certainly haven't formalized it well enough to be able to explain it to a brainless robot.
(Caveat: I have vaguely heard about attempts to formalize reasoning for computers so it's possible someone claims that I'm wrong here. However, there is a difference between a couple of groups attempting to do something and having some limited success and a decades-long proven research project that has been undertaken by thousands of individuals, which is what statistical machine learning is.)
What you're missing is that we have a strong evidence base, going back decades, showing that more data plus flexible prediction algorithms leads to more precise predictions. We have no comparable evidence base around reasoning, and so nobody has any particularly good reason to believe the same approach will work there.
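To make "searching a space of prediction functions and seeing how they do on the data" concrete, here is a minimal toy sketch of my own (numpy only), with polynomial fits standing in for the function space:

```python
# Empirical risk minimization in miniature: try increasingly flexible
# prediction functions (polynomials), keep the one with the lowest
# error on held-out data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.2, 200)   # unknown "true" relationship
x_tr, y_tr, x_te, y_te = x[:150], y[:150], x[150:], y[150:]

for degree in [1, 3, 5, 9]:
    coeffs = np.polyfit(x_tr, y_tr, degree)    # "work really hard" = least squares
    mse = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree {degree}: held-out MSE {mse:.3f}")
```

The robot needs no notion of "reasoning" to do this; it just needs a function space, a loss, and data. That is exactly the part we know how to specify.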
Do we have any data relevant to LLMs before GPT-2? It seems safe to believe compute will continue increasing for a while, but what basis do we have for predicting what adding compute to a world-historically unprecedented baseline will do?
My read is that the data is going to be the binding constraint. It's very clear from the statistical machine learning literature that increased model capacity requires more data to train. I don't think that there's any reason to believe that trying to use something more flexible/complicated to train on the same data set will lead to improved performance.
Of course, I'm just a statistician, not an AI researcher, so what do I know haha
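For a sense of scale on the data constraint: the Chinchilla result (Hoffmann et al. 2022) suggests roughly 20 training tokens per parameter for compute-optimal training. The model sizes below are arbitrary examples, not any lab's actual configuration:

```python
# Chinchilla-style heuristic: compute-optimal training wants ~20 tokens
# per parameter, so data requirements grow linearly with model capacity.
TOKENS_PER_PARAM = 20  # approximate ratio from Hoffmann et al. (2022)

for params in [7e9, 70e9, 700e9]:   # arbitrary example model sizes
    tokens = params * TOKENS_PER_PARAM
    print(f"{params/1e9:.0f}B params -> ~{tokens/1e12:.1f}T training tokens")
```

At trillion-parameter scale that arithmetic starts bumping into the total stock of high-quality text, which is the statistician's point.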
As an AI researcher, I'm on team 2 - there is no moat. I've spent my entire career about 4 years behind the bleeding edge. What used to take millions of dollars in hardware and data-labeling costs and world-leading research minds now only takes $50,000 for hardware/data costs and any old software engineer who got a 780 on their math SAT.
Do you mean because people can distill or transfer cutting-edge models, or because so much of the research is public, or something else?
Both of those, though I'd say public research is more important, and is getting rarer and rarer. I'm glad Meta is still committed to open source.
Also Moore's law: V100 GPUs have gone from a hot commodity to leftovers that no one wants to use anymore. But the cost version of Moore's law is way slower than the transistor version, maybe cutting costs in half every 4 years or so.
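Quick arithmetic on how much slower that cost curve is (the 4-year halving is the estimate above; the 2-year doubling is the classic transistor rule of thumb):

```python
# How long each version of Moore's law takes to deliver a 10x improvement.
import math

for label, halving_years in [("transistor density (classic 2yr rule)", 2),
                             ("hardware cost (4yr estimate above)", 4)]:
    years_to_10x = halving_years * math.log2(10)
    print(f"{label}: ~{years_to_10x:.1f} years per 10x")
```

So a 10x cost reduction takes roughly 13 years at that rate, versus under 7 for the classic curve.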
The AI boom is more financial than physical. The ten largest American AI companies together are worth about $21 trillion, yet the actual wages paid to U.S. workers building AI-grade data centers and cooling systems amount to only $5–10 billion a year. Even the cash leaving the country for chips—mostly high-end GPUs fabricated in Taiwan—is only $30–40 billion annually. By contrast, at the height of Victorian Railway Mania in 1846, roughly five percent of all British wages were paid to engineers, navvies, and the workers supplying steel, timber, and stone for railways. That was in an economy only just escaping the Malthusian trap, with little surplus labor to spare. The railway boom left high-quality mainlines that remain in service to this day, an accomplishment that survived an eighty-percent crash in railway stocks. The infrastructure being built today may have even greater effects despite being relatively cheap to build.
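Rough arithmetic behind that comparison, using the figures above plus an assumed total U.S. wage bill of about $11 trillion (my own estimate, right order of magnitude):

```python
# Compare the railway-mania wage share to today's AI build-out,
# using the figures from the comment above. The ~$11T total U.S.
# wage bill is an assumption, not from the comment.
US_TOTAL_WAGES = 11e12           # assumed annual U.S. wages and salaries
ai_buildout_wages = (5e9, 10e9)  # data-center construction wages, from above

for w in ai_buildout_wages:
    print(f"${w/1e9:.0f}B of wages -> {100 * w / US_TOTAL_WAGES:.2f}% of total")
print("vs. ~5% of all British wages at the 1846 railway peak")
```

Call it a twentieth to a hundredth of the railway mania's labor footprint, relative to the economy.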
“…in 1846, roughly five percent of all British wages were paid to engineers, navvies, and the workers supplying steel, timber, and stone for railways”
How much of that was coming from Taiwan?
Much of the timber came from the Baltics.
The timber for RR ties and trestles is akin to the poured concrete floors in the shiny new LLM data centers. Taiwanese graphics chips are the locomotives.
This is a productivity bubble. There will be a lot of innovation and change, but we'll work out who the winners and losers are pretty quickly. The danger for the market is that there will be secondary losers: the companies building out infrastructure for this that could be left holding the bag.
As for AI, it's slop. It needs to get way better. I honestly don't know if it will, but in its current form it's unusable for business.