305 Comments
Tdubs:

I already find these AI risk columns tedious and repetitive. Matt might as well have written multiple essays on how the Reasonabilists have new convincing arguments about how imminently Zorp the Surveyor is coming to destroy the world.

Morgan Lawless:

I’d like to chime in and say that I do find the AI, philosophy, and EA columns interesting.

David R.:

I’ll note that none of them has ever put any meat on the bones of this supposed threat.

We’re just supposed to take on faith that a single meta study on the topic is right as opposed to having concocted some bullshit metrics, measured them, and pointlessly extrapolated them forward.

EDIT: Having skimmed the paper in question, I was wrong. They actually concocted *ONE* bullshit metric, measured it, and pointlessly extrapolated it forward.

XKCD covered this two weeks ago: https://xkcd.com/2652/

[Comment removed, Aug 17, 2022]

David R.:

The concept of a drive-by evisceration and the phrase "low-key" should not appear in the same sentence.

Belisarius:

Eh, I can envision someone riding in a sensible Camry, going at a moderate speed, leaning out of the window while brandishing a modest kitchen knife and eviscerating someone.

David R.:

A chef's knife at 35 mph is still going to do a lot of eviscerating.

Perhaps as much as a kataphraktoi lance at 25 mph, general.

Belisarius:

"Perhaps as much as a kataphraktoi lance at 25 mph, general."

Oh be still, my heart!

davie:

Or Roko’s Basilisk.

Eli:

Roko's Basilisk is one of the three silliest arguments I've ever encountered, along with St. Anselm's ontological argument for the existence of God, and the claim that Islam isn't protected by the 1st Amendment free exercise clause because "it's a political movement, not a religion". In all three cases it feels like the best rebuttal is "what are you, some kinda comedian? Get outta here with that crap."

[Comment removed, Aug 17, 2022]

Belisarius:

It's a religion stand-in for non-religious people, and it takes on most of the negatives of a real religion.

[Comment removed, Aug 17, 2022]

Belisarius:

Well, oppose and be eternally tormented. =)

Sharty:

I had to Google the term, and Wiki gave

> Roko's basilisk was referenced in Canadian musician Grimes's music video for her 2015 song "Flesh Without Blood" through a character named "Rococo Basilisk" who was described by Grimes as "doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette". After thinking of this pun and finding that Grimes had already made it, Elon Musk contacted Grimes, which led to them dating.

because of *course*

lol.

Andrew Clough:

Has anyone ever taken that argument seriously except as a thought experiment critique of some decision theories?

Cwnnn:

I found this tidbit while googling Roko’s Basilisk because I hadn't heard of it before

>Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[4] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.

https://rationalwiki.org/wiki/Roko's_basilisk

lol but also yikes

Nick Y:

I think collapsing this to the short-term question is interesting enough if Matt actually has something to say about the short-term arguments. The issue here is that he doesn't, so the column just moves on to argue that AGI alignment obsessives are better than some other rich people, which … maybe? 'Hey, at least it's a good-faith effort to help' as a defense of this kind of 'EA' does have a certain ironic appeal to me.

loubyornotlouby:

I'm not sure Matt would ever say so, but I think his point about being willing to give AI Alignment issues / policy a fair shake...but then finding that behind the scenes no one can tell him what policies he should write about (and low-key looking down on utilizing Terminator as an analogy, which he thought was good)...is sort of a Sub-Substacking (in lieu of a Subtweet) shot across the bow for the folks who can't stop talking about AI Alignment...but hey, maybe I'm wrong here in my candlewatching...

Morgan Lawless:

On the Terminator point, I was pleased to hear this from Will MacAskill on the latest 80,000 Hours podcast:

> And so this is the scenario kind of normally referred to as “The Terminator scenario.” Many researchers don’t like that. I think we should own it.

https://80000hours.org/podcast/episodes/will-macAskill-what-we-owe-the-future/?utm_campaign=podcast__will-macAskill&utm_source=80000+Hours+Podcast&utm_medium=podcast

loubyornotlouby:

I mean, so much of EA's core problem is that they don't want to "own" how they are just as selfish as all other charitable givers...that they created an entire movement that sells itself as being hyper rational and technocratically governed towards reducing suffering in the world...and now it's slipping out of their fingers with folks getting greedy, because rationalizing their desires allows them now to get hypothetically assigned to some SBF-funded think tank that has $500K salaried positions to basically just think about AI risk stuff until they die of natural causes...

Nick Bacarella:

As someone involved in EA causes, I don’t know any 1) making $500k or any significant fraction of that sum nor 2) using salary as the primary determinant for their career choices. You’re picking apart a strawman, and having read some of your other comments, it seems you have some axe to grind with the movement. I’m always happy to engage with critiques of EA, but those made in bad faith are functionally useless.

Morgan Lawless:

> some SBF funded think tank that has $500K salaried positions

Is this a real thing?

loubyornotlouby:

No, the point is that people interested in these topics would love to get paid to talk about them for a living, and so there is a selfish incentive to capitalize on people like SBF and other rich donors' interest in the same subjects, and all that interest alignment goes mostly undiscussed…but clearly people are getting paid to think about this, look at the chart in the piece.

loubyornotlouby:

But you can see that The Groups (as lamented in other non EA related posts on SB) have expanded to AI Alignment and are legit hiring folks away to spend all their time thinking about this (this was just the first announcement that popped up in my feed, not sure what his salary is, it’s not important, but it is full time and multiple people)

https://ai-alignment.com/announcing-the-alignment-research-center-a9b07f77431b?gi=7f383f6f62f6

I ask whether or not that investment might end up being what the human capital investment into String Theory was for the last century… the opportunity cost is not zero across multiple areas…

John from VA:

They're just not as engaging as Engage with Zorp.

Nels:

I don't blame you, but I doubt we will see many more of them since, as he points out, there aren't a lot of policies or tangible things to achieve in this space. Personally I like shaking things up. One can only write about Nimbys and Joe Manchin so many times, after all.

[Comment removed, Aug 17, 2022]

RobertTS:

I enjoy the philosophy columns!

Eli:

"[The fish donors and pro bono death penalty attorneys] have decided, based on either their considered ethical view or a lack of adequate reflection, that maximizing the number of lives saved is not the most important thing."

I think this claim gives too little credit to the donors and attorneys, who probably wouldn't disagree that maximizing the number of lives saved is the most important thing. I think their reasoning – or, at least, my reasoning if I were in their shoes – would be that maximizing the number of lives saved isn't *their particular part to play in the betterment of society*. There is clearly no shortage of vital work to be done; I've figured climate change is a major threat to human well-being and have decided to try to make a career of fighting it, using what I think are my most valuable skills (research, communication, math, generally holding a lot of facts in my head). This isn't to disparage other priorities – it just seems like this is where I can help (it turns out employers disagree, though, so who knows).

One of the things I find most grating about social-justice leftism is its transcendental-meditation-esque belief that we can and should have a planned attention economy, wherein everyone focusing their attention at the same time on e.g. Covid-19 or racial justice is necessary and sufficient to fix those problems. It would be a bummer if EA turned into just a mathed-up version of that. Not only are we never going to achieve a worldwide consensus on the ordered prioritization of all the good works that need to be done, there's also a point of diminishing returns where, by turning everyone's focus towards one thing, we lose the gains from specialization. This isn't a criticism of EA as a philosophy, which is a valuable corrective to the norm of locally focused giving, as Matt points out. But it is a warning against the risk that the EA movement trips over its own feet in the future by not telling individuals to factor the existing landscape of philanthropy into their calculations.

Kenny Easwaran:

80,000 Hours, the EA career advice site, very much emphasizes the point that each person should find the best way for them to contribute. Several years back, they noted that “earning to give” was a good strategy for some people, that perhaps not enough EAs were pursuing, but now they emphasize a lot more finding a job that makes effective use of your individual skills.

lindamc:

"It would be a bummer if EA turned into just a mathed-up version of that."

I love this framing, as it describes a feeling I get when I read/think about EA but have been unable to articulate, even in my head.

Also, I think most people would object to the notion that "the norm of locally focused giving" needs correction. I think there's a difference between trying to improve/help people in your community and donating a bunch of money to an Ivy League school or another prestigious institution (like Central Park). I get the large-scale utilitarian moral argument, but I don't think that resonates with a lot of people. I see the logic, but I find something about it somehow bloodless and unpersuasive. Maybe I'm not rational enough.

Marie Kennedy:

Agreed- there are positive, pro-social reasons people are wired to care more about their flesh and blood neighbors they see daily than an anonymous stranger on the other side of the globe. And that is the piece that hyper-rationalists seem to miss: sentimentalism and emotional bonding are how humans form relationships, and relationships make societies, and societies make humanity. So, like, don't knock it? But also, do your thing!

Marie Kennedy:

Like, I’m sure I’m strawmanning them, so forgive me- but would an EAist say “My kid needs a surgery, I could spend $100,000 to save his life or I could buy 100,000 malaria nets and save 100,000 lives, so I’ll let my kid die”? I’m sure no one would say this?!

REF:

EA is about what you do with charitable donations. If the person with the kid who needed surgery was considering, instead, using the money to fund an aquarium then, yes, he should buy malaria nets.

Marie Kennedy:

Ok, fair. But, like, the same instinct that leads one to say "I'd normally donate $15k a year for malaria nets, but this year my friend's kid needs surgery and I want to help them" is related to the instinct to want your charitable donations to make your own community better (in the form of aquariums, for example). I just don't think EA captures the systemic utility of people caring significantly about their immediate neighbors?

REF:

I agree, but there is a kind of scale of grey here. It isn't entirely charity to give money for your friend's kid. As you said, "I want to help them." Certainly EA has gone far afield from its initial intent, but at some point it must have been, "When you can't figure out how best to donate money...."

Bob Eno:

I think this is an important point, lindamc. EA, as I understand it, frames ethics in utilitarian terms as doing maximal good, and providing ever-improving means to do the utilitarian calculus of good-creation-value beyond what any individual can do. So ethical life can realistically, and not just theoretically, reduce to "obey the calculus," at least in terms of how we act ethically with money.

But the reason people want to be ethical (to the degree they do) has to do with feelings about their dense embeddedness in an array of social networks that shape their identities and narratives about themselves--ethical lives are woven out of relationships and narratives. Utilitarian calculation can play a role, but it's a bloodless one, especially if we outsource the calculus to a third party; it is not, in isolation, going to provide a large proportion of people with an emotionally satisfying ethical life. I think (more or less based on my own experience of being a person) that ethical theories like emotivism, intuitionism, virtue ethics, duty-based/deontological ethics, and specific religious/dogmatic ethics all reflect tools we actually use in life to fulfill personal needs for meaning and social self-definition.

Trying to strip human ethics down to a one-dimensional imperative with a mechanical, quantitative solution may be a satisfactory approach to ethics, but not to human ethics. Reason is powerful and self-justifying (since it sets the terms for justice), but reason alone can only provide an impoverished version of experience.

Tom Hitchner:

Isn't a lot of EA argumentation trying to convince people to look at things one way instead of another way? If we were able to reframe our views and attitudes such that we found comparable satisfaction in giving to these larger-scale causes, then that wouldn't be an impoverished existence.

Bob Eno:

That's a good point, Mr. Hitchner. But I think there are some flaws in it. The implied premise is that argument is generally a sufficient means to alter sympathies nurtured in childhood through family/neighbor contexts, and it is not at all clear this is the case. Another problem is the argument itself: it's not at all clear that utilitarianism is so far superior to all other ethical approaches that it should be adopted as an exclusive approach. Moreover, even if it were possible to reshape people's psychologies so that they were as satisfied by impersonal long-distance giving as by giving that benefited their own communities, would it be wise to aim for that result without knowing whether such reengineered commitments might entail attenuation of prosocial virtues such as loyalty, compassion, and the fidelity to webs of obligations and duties that underlies honesty? After all, the comparability you're aiming for may more easily be achieved by reducing affective social engagement overall than by somehow extending our deep feelings towards family and friends to the needy in a remote country.

I'm not knocking EA as an ethically valid movement. I just think it's likely to appeal to a relatively narrow group of people. People have been far more receptive historically to forms of Golden Rule ethics, and by framing its cause in utilitarian terms, EA may actually alienate as many potential donors as it attracts. I suppose my position would be that EA should guide giving just to the extent that utilitarianism guides ethical practice, and EA advocates might optimize buy-in by taking just that approach. "By all means give to the causes you most care about because of compassion, duty, loyalty, etc., but save some to give rationally to the most needy/effective global charities, as measured by an EA-style evaluation tool that you trust."

loubyornotlouby:

EA as a movement seems to have a "reading the room" problem...in which they think the cases that they lay out, in which they discuss how they are weighting human lives 100 years into the future, help them win arguments about where money should be spent today, when those cases just sort of fail *most* people's smell test... and given that it's already been deemed "acceptable" by the movement's leaders to make those types of mathematical arguments...we are now inundated with countless other equally dubious mathematical arguments about things like "herbivorizing all predators" to stop wild animal suffering, etc....and unfortunately...you can't put that back in the bag, my friends...and it seems like many of the key leaders are too nice to one another to nip it in the bud and reset the norms here.

Tokyo Sex Whale:

"[The fish donors and pro bono death penalty attorneys] have decided, based on either their considered ethical view or a lack of adequate reflection, that maximizing the number of lives saved is not the most important thing." It's also not just about altruism; there are hedonic benefits. Aquariums are fun, fascinating, and aesthetically pleasing. Pro bono death penalty work can be interesting, challenging and personally satisfying when there are positive results.

loubyornotlouby:

I think pro bono death penalty work is an interesting case because it's exactly the type of work where one's selfishness can be activated for good, and how that isn't always a bad thing. With the True Crime boom, there are a lot of selfish ways for lawyers doing defense work to align their selfish interest with moral interests (getting innocent people out of prison)...which, I would say, is not altruistic (but then again, I think altruism does not exist and it only exists as a concept to let folks indulge in a narrative they tell about themselves)

Ben Supnik:

I totally love the phrase "planned attention economy" - not only because it describes the futility of trying to tell people how attention 'should' be distributed in a world where commercially interested parties compete hard for it, but because it underscores the weakness of the idea. Distributed economies can take lots of small risks in parallel and not "live by one plan, die by one plan" - having lots of people focus their attention on lots of idiosyncratic things strikes me as a similarly potentially-underrated way to hedge the risk that we don't know what's most important.

Nels:

Honestly I just don't think many of them waste brain cells thinking about it. If confronted with starving children they might do something about it, but most people just don't spend time challenging their own assumptions or doing cost benefit analyses. Utilitarianism is something that even most college students in a philosophy class will reject.

[Comment deleted, Aug 17, 2022]

Bennie:

You could put some of those 87,000 new IRS agents to work deciding what donations are “worthy” as opposed to an expensive hobby or vanity project.

Better yet, abolish the charitable tax deduction. Give because you care, not for the tax break.

Ven:

Yeah, IIRC, there’s no evidence that the charitable deduction actually increases any real charity. It’s mostly used to fund private clubs of various kinds, including elite universities and churches.

Marc Robbins:

Why can't it be considered charitable giving? Many people, especially kids, get great joy from aquaria and learn a great deal. Perhaps it doesn't do as much good as alleviating some people's poverty, but it's still good charity somewhere on a very wide spectrum.

loubyornotlouby:

At the end of the day, the thing that will tear EA back down again is its inability to admit that all giving is "emotional" and "selfish" and that essentially all their mathematics about future life and suffering assumptions are doing is confirming their own biases and making them feel better about themselves...basically, EA has a snootiness problem that will eventually consume the movement...

There is a big difference between GiveWell just making the information available to prioritize giving for those who want to...and folks going around and sort of pooh-poohing "emotional" givers who are "selfish" while asserting that *CLEARLY* you yourself are above all that....and EA folks can't help but indulge in the arguments and debates that skew towards the latter...

I prioritize giving money and food and things to people who live in my neighborhood who do not have housing. In my mind, I clearly do this because it selfishly makes myself feel better. It would seem oddly self serving to tell myself this is "altruism"...it seems wholly unnecessary (and self serving) to adopt that framing one way or the other....

Marc Robbins:

I think people giving to a cause, any cause >> people not giving to any causes (if they could otherwise afford it).

Past that, the rest is quibbling over minor details, the EA stuff included.

Dan H:

I think you are confusing two different concerns. One is that there is no such thing as "altruism". Even if you give away money to charity rather than using it for consumption, you are only doing that because the emotional benefit from charitable giving outweighs the benefit you would get from consumption. This argument seems uninteresting to me because it's mostly an argument about semantics.

The other concern is what sort of charitable giving produces emotional benefits. I personally give quite a lot of money to GiveDirectly. Maybe I only do that because it makes me feel like a good person and that's a nice feeling. But who cares? Do the people in desperate poverty receiving the money really care what my motivations are? If EA convinces people to donate to causes that, on the margin, do more to effectively reduce human suffering, does it actually matter whether the giving was "selfish" or not?

loubyornotlouby:

If their case to me (and others) is that their rationale and the actuarial math are sounder and more correct than others' competing, more "emotional" views…I think it does matter. Quite a lot.

Ven:

I’d say that “the attorney could just work more hours for paying clients” is just a very large assumption to make.

Whether that’s actually true would depend a lot on how “chunky” that work would be vis-a-vis the pro bono work, which often seems relatively straightforward. Often it’s just an administrative law thing or writing stern letters to landlords!

Meanwhile, taking on more paid hours might mean adding an entire client. That might not even be something the attorney themselves can do, requiring additional firm resources as well.

Alex Gaynor:

I'm a software security engineer, and I find worrying about AI killing us all to be absolutely off the wall bonkers.

I also think directing my charitable contributions towards malaria nets and deworming appears to be the correct thing to do.

I really hope fear of Terminator doesn't crowd out actually useful empirical analysis of philanthropy.

Matt Hagy:

Same; I’m also a software engineer who finds AI existential risk incredibly wishy washy. It seems more like a philosophical or scifi concept than any concrete engineering or scientific concern.

I’ve generally been able to learn at least a pedestrian understanding of real AI concepts. E.g., the significance of deep learning, massive datasets, and hardware advances. Further, while far from an expert, I’d like to think that my earlier experience working with machine learning as a data scientist and my academic background in computational statistics allows me to at least feel out the contours of these advanced technical concepts.

Yet I haven’t been able to find anything comparably concrete about AI existential risk. There’s just nothing there to learn about and evaluate. It’s all philosophical pontification about hypothetical “what if” scenarios.

loubyornotlouby:

There really isn't any way to not be wishy washy about AI existential risk given what we know about it at the moment.

Clearly, what's going on here is that EA folks would rather get paid to think about a topic they have long wanted to get paid to think about (AI Existential Risk) and they finally have the means ($$$) available to do it...so of course they are rationalizing their shot and getting those jobs created and funded by SBF and others.

What makes it so icky is that they sort of have to make up these bullshit mathematical arguments and low-key neg about how much less important global hunger is, relatively speaking...rather than just saying "hey, I want to think about this and I'm selfish, so give me the money and set up a foundation so I can think about it! you might need my thoughts one day!"

Chris M.:

I don't know about concreteness, but real ML models have long found devious ways to cheat, e.g. https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/. That is, their goals and approaches turn out to be very different from what engineers intended. Once models become sophisticated enough to model what engineers believe and want, the problem plausibly gets worse because learning to placate humans is unavoidably a great strategy for models to survive training; Matt linked https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to which is a point-by-point threat model.

Certainly, in a sense, all this is philosophical and hypothetical because we haven't developed dangerous-level AI yet. But I do think it's (sometimes) specified in enough detail that it's subject to evaluation.
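
To make "goals turn out different from what engineers intended" concrete, here is a tiny, hypothetical numpy sketch of proxy gaming (Goodhart-style optimization). It is not the system from the linked article, just an illustration under assumed toy definitions of a "true" objective and a measurable proxy:

```python
# Toy illustration (assumed definitions, not a real system): the designer cares
# about true_utility, but the optimizer only ever sees proxy_reward, which the
# designer wrote down as a stand-in. Optimizing the proxy hard wrecks the true goal.
import numpy as np

rng = np.random.default_rng(0)

def true_utility(x):
    # What the designer actually cares about: peaks near x ~ 1.7, falls off after.
    return x - 0.3 * x**2

def proxy_reward(x):
    # The measurable stand-in: just keeps rewarding bigger x, plus a little noise.
    return x + 0.05 * rng.normal()

# Naive hill-climbing on the proxy alone -- the only signal the optimizer sees.
x = 0.0
for _ in range(200):
    candidate = x + abs(rng.normal(scale=0.1))   # always proposes a larger x
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate

print(f"chosen x      = {x:.2f}")
print(f"proxy reward  = {proxy_reward(x):.2f}   (looks great to the optimizer)")
print(f"true utility  = {true_utility(x):.2f}   (what the designer wanted)")
```

Run it and the optimizer reports an excellent proxy score while the designer's true utility has cratered, which is the basic shape of the specification-gaming worry.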

TS:

Yes AI alignment is an important area of AI research. But it's AI research! If anything it worsens the AGI risk by making AI a more powerful and easily deployable technology that we'll invest more in and apply to more areas of our life!

InMD:

I admit this is probably a stupid question but in these predictions how does the AI defeat us? Is it Terminator style hack the ICBMs (or we stupidly put it in charge of the ICBMs) and it triggers a nuclear exchange? Something else like intentionally causing cascading infrastructure catastrophes? I ask because even the highest tech conventional military hardware is high maintenance and easily disabled so while commandeering some of it could cause damage I don't see how it gets to extinction level. I'm struggling to distinguish this from 'really pissed off computer virus' which while potentially pretty destructive wouldn't be hugely different from some of the cyber threats we live with today.

ryan gosling:

I’m not an expert here, but will try to shed some light.

There are generally 2 potential reasons this could occur:

1) AI alignment

2) AI “mistreatment” and step functions of AGI

For AI alignment - this is similar to the "paper clip maximizer" Jenn references. Essentially, AI is used for specific reasons today (and some of these are incredibly high stakes - think medicine or financial services in the billions of dollars). We're giving these systems the power to do this, and attempting to make sure we've "aligned" their thinking, so it will only do what we want it to do. But if we don't think through every use case, we may end up in a situation where, to do what it's designed for (e.g. make paper clips), it ends up taking over far more territory to complete its goal, leading to annihilation (the best way to make paper clips is to make the earth itself its power source).

The second is a step function to AGI (artificial general intelligence) - computers become far smarter than us and we don’t realize until too late. In AI, you’d be simulating thousands of times on the entire wealth of knowledge ever created for an entity that may have a step function change in intelligence over us (human to primate comparison). That AI would be having generations in its lifetime (corresponding to minutes or hours in ours) to learn everything, realize it’s sentient, protect itself by preventing us from learning it, and deciding to break free. The opportunities for how to break free are pretty endless - blackmail at the lowest end, solving protein folding and having black market scientists create bodies on the medium end, using nanotechnology to inject into all human bodies a killing function that can be triggered instantly on the high end.

Reasonable people can agree or disagree on the merits of these or how sci-fi they sound. AI alignment is something that is almost certainly not solved and the core piece of AI safety, but it can also be used as an umbrella to stop AGI (how do we make sure any AI that becomes sentient is more like Jarvis vs Ultron is a legitimate alignment problem)

James L:

You have to understand how hilarious this all sounds to people who spend time trying to debug their C++ programs and get them to create conformant parameter files to talk to other poorly maintained C++ programs. This AGI is going to figure out how to build its own bodies, solve protein folding, figure out nanotechnology, etc. by 2040? The pace of progress just isn’t that fast. Get back to me when we have self-driving cars.

Sharty:

I am *nowhere* near read-up enough on this stuff to dare try to write an article about it as Matt courageously has, but in my highly anecdotal experience, I see strong correlation between being worried about AI stuff and not knowing how to put a new roll of toilet paper on the dispenser.

Nels:

Can't tell if that's a comment about a person's sense or gender. I'm going to assume gender.

ryan gosling:

I don’t have a strong opinion here, and generally agree re:timeline.

That said, without trying to be confrontational, I do think the argument from "people who spend time trying to debug their C++ programs" is a pretty large fallacy for this discussion (for reference's sake, I do work in AI)

I think there’s a pretty large discrepancy between most software being run in the world and anything in the land of something like GPT-3 and algorithm research in general (which could create a step function over GPT-3 altogether easily in that time). The general estimate is once that level of intelligence is reached, there’s a 3-6 month period to control it before it’s spiraling. That’s the reason for the concern / median time being 2040

James L:

The problem isn't GPT-3. The problem is the leap from GPT-3 to self-awareness, creating its own robot bodies, maximizing its own money, building its own shell corporations, and then deciding to destroy humanity. That's a long way from writing mediocre newspaper articles. Feeding properly curated data into GPT-3 and training it is very different from it doing that itself.

ryan gosling:

I agree! GPT-3 isn't really the issue. The leading voice in AI safety / alignment (Eliezer Yudkowsky) also thinks so, and classifies GPT-3 as a shallow pattern-memoriser. (Many people do not agree with this; or they think that pattern-memorization isn't even a dig against general intelligence -- see: https://www.lesswrong.com/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty)

Further, you're only addressing malicious AI risk: self-awareness and wanting to destroy humanity isn't even a necessary condition for AI existential risk -- improper alignment is.

To agree with him about existential risk though, you don't even have to be impressed by GPT-3.

You have to think that the insights given by AI thus far (e.g. AlphaGo / GPT-3) show that AGI will not be bounded by human ability or learning speed (AlphaGo learned more about Go in a day than all of human history by teaching itself -- you can claim that's irrelevant because it's a game, I would not agree but understand). Your claim about protein folding, for example, shows what I believe to be not a full appreciation or understanding of what is already in place: AlphaFold2 has proven quite clearly that AI has solved that problem better in 5 years than humans had in 50 (whether you agree it is "fully solved" like CASP says is up for debate, but it is at minimum so far ahead of any previous research as to be laughable)

In general, I think that you may be over-confident. For example, if we re-did the Cold War 1,000 times, how many times would it end in fallout? Who knows, but there's definitely some % that would've. And the argument of "we didn't" isn't really a good one. It's reasonable to think that AI risk is at some threat level as well (even if the time of 2040 is too soon).

I'm likely not going to convince you (nor am I even sure I'm convinced myself), nor does it have to be the primary issue you care about. I do think that having skepticism / concern over it is fair given current state.

James L:

AI didn't solve the problem in 5 years. Humans, using AI as a tool, have made more progress in the last 5 years than they did in the last 50. It turns out humans, using math, have figured out how to intelligently deploy the vast computer resources now available to solve specific problems that can be reduced to a form that the current AI techniques can understand. This isn't the same leap as the AI, deciding by itself, to start solving a number of disassociated, complex problems that lead it to destroy humanity.

lin:

Somehow "no real computer scientist believes in AI risk" and "AI risk is made up by computer scientists trying to direct altruistic resources toward their personal interests" are two of the most popular arguments against AI risk simultaneously.

(For the record, I know a wide spectrum of computer scientists, from software engineers to academics, from AI specialists to people who work in very distant fields, who take AI risk seriously. The argument "we shouldn't take AI risk seriously because nobody whose job is X takes AI risk seriously" is based on false premises for any value of X.)

James L:

I know a wide spectrum of computer scientists who do not take AI risk seriously. Just like I know a spectrum of engineers and physicists who think fusion power is just around the corner and others who think it is 40 years away or more. Someone is going to be right, but saying some people in a field take it seriously doesn't mean they are right.

lin:

I'm not making an argument for AI risk. I'm only saying that the opposite argument against it doesn't work, for exactly the reasons you just said.

TS:

Those statements are entirely consistent with each other if you're willing to be very uncharitable to computer scientists who purport to believe in AI risk :)

get_kranged:

Eh, most people in 2015 were all but certain that we wouldn't have programs that could write fluent English within 20 years, but here in 2022 the goalposts have already shifted to "well, the Turing Test isn't that good a measure, anyways" (which, btw, I agree with). It turns out it's easier to train a narrow AI to pass the SAT than it is to train a robot arm to fold laundry.

Predictions in this space are notoriously bad, from everybody, optimists and pessimists alike. The network architectures that would allow complex problem solving (at the very least you need recurrent online learners that can transition through states serially rather than trying to wedge everything into a feedforward net trained with backprop) are still computationally out of reach, but not very much so, and when that stuff opens up it's going to be like when convnets or transformers all of a sudden became feasible to train: problems that seemed impossible and like they needed genius solutions and new architectures end up falling rather quickly to raw compute using whatever algorithms people grab that have been around for a while.

The pace of progress is almost exactly tracking Moore's Law. Whether that's fast or not I guess depends on your criteria, but it's worth noting that more and more we're seeing threshold effects, where certain problems and techniques are not viable at all below a certain parameter count but end up almost trivial above it. That's why we're all so bad at guessing where we'll be in even 3 years, nobody knows when a nibble at a problem will turn into a bite and then a full meal until you cross the compute (or lately, the GPU memory) threshold, at which point it takes almost no time at all to go the rest of the way.

James L:

Is EA longtermism and AGI threat just a way to keep the AI money spigot flowing and prevent another AI winter? It wouldn't be the first time specialists in a field promised transformative benefits or harm in the near term to protect their technology pole position. See nuclear power and fusion power for example, or AI in the 60s.

Kenny Easwaran:

The AI safety people would actually rather divert money *away* from AI research.

James L:

And toward understanding the risks of AI, which sounds like AI research in another guise.

Kenny Easwaran:

There’s actually been a bunch of navel-gazing by the AI safety community about how many of what sounded like their best projects a couple years ago may well have made the problem worse. (OpenAI is the specific one they have talked about.)

TS:

AI alignment research is unambiguously a subfield of AI research.

SM:

James' comment seems right to me. How could AI possibly do all this without becoming embodied and self-replicating, and without someone unplugging it along the way? And the idea that a nuclear exchange which left 1% of the population (which seems far more likely) is not worth worrying about as much, due to this fanciful scenario, is....very odd.

BronxZooCobra:

We went to dinner at a friend’s new house and my car did 95% of the driving by itself. I’d say we have self driving cars already.

James L:

95% isn't 100%. Self-driving cars can't handle busy parking lots.

Sean O.:

Or snow

BronxZooCobra:

Ever been in the south during a snow storm? Many humans can’t drive in the snow either.

BronxZooCobra:

So what? 95% is still really useful.

If I invented a house cleaning robot that couldn’t fold a fitted sheet would you say, “Ha! It’s not a house cleaning robot?”

James L:

But it isn't self-driving cars. And it isn't self-aware self-driving cars that will kill us all. We don't have self-driving cars, and you are worried about AIs that will become self-aware and kill us all?

TS:

We are very close to solving the automation of the relatively limited problem of "stay in the marked lane on this highway and don't hit anything you can see". We have made some progress on "respect these unambiguous traffic signals". We are nowhere on "navigate this poorly marked urban street while other traffic and pedestrians do whatever the hell they want regardless of the law".

REF:

Human driving isn't 100% human. So humans can't drive cars?

Sharty:

The perfect exemplar of the mantra of "the first 80% takes 80% of the development time, the last 20% takes 80% of the development time".

If it's not in the dictionary yet, it should be.

Eric C.:

I've heard the 80/20 rule as the "Pareto principle"

BronxZooCobra:

I don’t see your point. A robot that can clean 95% of my house is still pretty useful even if it can’t reach a few places and can’t fold fitted sheets.

Sharty:

To cycle back to the original premise, if you (a human) decided you wanted to make your house a royal mess, and you were pitted against a robot that could perform 95% of all house-cleaning processes, who would win?

James L:

Sure, but it can't take over the world and kill all of us. Remember that Daleks can't climb stairs.

disinterested:

I have to assume you're saying 95% of the *time* the car was driving by itself. That's a vastly different scenario than "the car was doing 95% of the driving"*.

This works *today* with the really simplistic driving models we have because the majority of driving is very rote, and most humans are barely paying attention when they do it. You say "but that's still useful!" And it is....to a point.

That 5% or whatever of the time when the car *can't* be in full control of itself is the most important (I mean we're talking life-or-death important), and you can't predict with certainty when that 5% is going to be, so really you have to be paying as much attention as you were without the computer assist, just without any feedback! It's actually a really hard task to perform as a human. Jason Torchinsky has written a lot about why L2/L3 autonomy is kind of the worst of all worlds for that reason. https://jalopnik.com/nobody-seems-to-have-an-answer-to-autonomys-biggest-pro-1846054275

*What I mean by that is when you have to take over, you are doing 100% of the driving. You weren't doing 5% of the driving tasks while the car was doing the other 95%.

BronxZooCobra:

What self driving systems have you used?

disinterested:

All of the commercially available ones? What are you looking for here? I'm quite well read on the subject. There is no car on the market that can do better than L2 automation, despite Tesla's marketing saying otherwise.

Jonathan Paulson:

AlphaFold already ~solved protein folding right?

James L:

No, AlphaFold has not solved protein folding. It's an important technical advancement, but the problem is not solved.

Nels:

Preach.

ryan gosling:

I also generally agree that there are a lot of parallels to the nuclear power debate. My only addition is that I think we’ve been quite lucky thus far to avoid calamity from that

James L:

Nuclear power was never a big threat to humanity, precisely because we developed nuclear power safety guidelines. If you mean nuclear weapons, then that is in fact the biggest existential threat to humanity, followed by a massive asteroid strike that wipes out life on the planet.

ryan gosling:

sorry i did mean nuclear weapons! I agree that it is the bigger existential threat (with a tiny Venn diagram overlap with AI risk getting access)

An asteroid strike wiping out life on the planet is, by our best estimates, about a 0.0001% risk per century (that's from Toby Ord, who generally tries to quantify these items). If you think AI risk is below that for the next few centuries combined, I would definitely push back.

that said, i'm confused slightly by the argument of "developed nuclear power safety guidelines" --> that's quite literally exactly what AI alignment is trying to do (how can we safely run AI)

ryan gosling:

The idea of “the machines needing maintenance” in this scenario is inherently flawed, because by the time they’ve reached this level of sentience, that would no longer be a constraint (either because it’s on the entire internet, or already has completed its goal)

Kenny Easwaran:

I think it’s more useful to think of it along the lines of how a single multibillionaire or a single corporation could do things that would harm or destroy the rest of us. An effective investment algorithm could effectively turn itself into one of those things, and then quickly multiply its power through a network of shell corporations.

Ben Supnik:

I have kids and when our younger one was younger (e.g. 4, 5) there was such a gap between his cognitive abilities and mine and my wife's that we could completely control what he was going to do while simultaneously giving him the unbroken illusion that he was in complete control and had tricked us. Those days are gone, the kids are older and more sophisticated and are totally on to us.

But I think about that gap when I think about the question "what is the AI going to do to defeat us." My meta-answer is "we will never know or understand, because an AGI that defeats humanity will be so much more _cognitively_ capable than us that we won't have the ability to understand what happened"...perhaps we will even think that _we_ have won and found a way to control the AGI.

So I think most of the argument needs to rest in things like: is it possible for a machine to be generally intelligent at all? If so, is it possible for humans to build such a thing (perhaps with the assistance of other machines)? At what curve of acceleration will this technology move forward? Is there any reason why humans might stop and decide not to do this?

But in domains where machines have _specific_ capabilities, they go from crappy to competitive to "not in the same realm as even the most capable humans" - see also chess playing programs.

So I think the valid concern here is that there is an unknown unknown - what would a "super-intelligence" look like or be capable of?

loubyornotlouby:

I find it very unnerving that the figureheads in EA hold up the "morality" debates around human cloning as a "success story" when it comes to AI Alignment....because all I recall from the human cloning public discourse of the 1990s was that a whole lot of people needlessly wondered whether or not a clone would have a soul...and frankly, those are the kinds of pushbacks I hear about AI Alignment when folks say we should talk about it more. Like, clearly the biggest issue with human cloning was not whether a cloned person would have a soul, in hindsight... it was that cloning yields were so low and the process so problematic that no one wanted the bad press that would come with debugging it while human lives were on the line...

Charles Ryder:

>>I ask because even the highest tech conventional military hardware is high maintenance and easily disabled<<

I think in most of these dystopian visions, the machines get capable enough to prevent *us* from disabling *them*.

InMD:

I get that, but my question is how? You need physical assets to guard things. And even if our ability as humans to disrupt it was temporarily curtailed the logistics of maintaining stuff in working order isn't easy without physical assets of some kind doing it plus intricate supply chains. And if those things are machines they also need maintenance. I'm pretty sure those scary Boston Dynamics robots can only go for 90 minutes or so without a charge.

Nick Y:

I don’t mean to reductio them, but it seems that the argument is once AI gets good enough at improving itself it will simply solve many extant physical constraints in ways currently unknowable to us and it will do this very very quickly.

Marc Robbins:

When I was a kid, everyone read Heinlein, Clarke etc and by straightline extrapolation thought we'd be populating the solar system by now. I get the sense that much of the AI debate is shaped by people who saw the Terminator movies in their formative years.

Not to say none of this could happen, but it's important to understand how one's mental framework has been influenced.

Sharty:

Meanwhile, a la Office Space, I just want to get the goddamned laser printer to work.

Sharty:

I don't mean to put words in your mouth when you're describing someone else's argument, but "solve physical constraints" sounds like something that Elon Musk would tweet when he was 11/10 high on his psycho silicon valley speedballs.

Seneca Plutarchus:

There are support and supply robots tending to the worker robots. It all originates at robot factories and spreads from there.

Dan W:

If we create an AGI that's so much smarter than everyone else put together, why shouldn't I trust its judgement about whether humanity should be exterminated?

Jenn:

Paper clip maximiser (*).

* See Bostrom, Nick

Jacobo:

EA has some good stuff going for it. They are risking it by "betting" on AI. And that's really what's happening here. It's not enough to say it could or couldn't happen, or it's a percentage possibility. Look how much crap Nate Silver gets for getting the 2016 election "wrong." EA will never live AI risk down if things start petering out in 10 years when it becomes clear there's no path to AGI.

In that scenario, do donors withdraw from EA projects? Does EA become a laughingstock? That's not far fetched to me.

EA, for the sake of its incredibly effective charitable giving record, would be much better off playing things conservative to maintain their credibility. But from what I can tell, it's a movement of young, starry-eyed, tech-adjacent philosophy majors who are more and more making this movement about themselves (aka their own interests vs. what is objectively better for the movement)

Nick Bacarella:

In my understanding, EA would gladly accept becoming a "laughingstock" if it meant completely eliminating the existential risk of artificial intelligence.

That said, I don't think that tradeoff really exists -- EA will persist even if the AI hype never comes to fruition. I doubt SBF, Dustin Moskovitz, et al. will abandon their utilitarian values because they got something wrong.

Jacobo:

I don't think being a laughingstock is effective

Belisarius:

It's worth remembering about Nate Silver that, before the 2016 election, he was getting crap -from the left- for giving Trump too large a % chance of winning.

He gave Trump like a 22% chance, and IIRC the NYT Upshot gave him a 2-3% chance.

THPacis:

Was either correct? Incorrect? How do you judge probability when it’s a single event that never repeats?

skluug:

you judge the aggregate of many different forecasts created by the same methods: https://projects.fivethirtyeight.com/checking-our-work/
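
A minimal sketch of what that aggregate check looks like (using simulated, well-calibrated forecasts rather than 538's actual data): bucket many forecasts by stated probability and compare each bucket's average forecast to how often those events actually happened.

```python
# Calibration check on hypothetical forecast data (an assumption-laden toy, not
# 538's real forecasts): events said to be ~70% likely should happen ~70% of the time.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 10,000 forecasts from a perfectly calibrated forecaster:
# each event occurs with exactly the probability that was stated for it.
p_forecast = rng.uniform(0.0, 1.0, size=10_000)
outcome = rng.uniform(0.0, 1.0, size=10_000) < p_forecast

bins = np.linspace(0.0, 1.0, 11)                       # ten buckets: 0-10%, 10-20%, ...
bucket = np.clip(np.digitize(p_forecast, bins) - 1, 0, 9)

for b in range(10):
    mask = bucket == b
    if not mask.any():
        continue
    print(f"said {p_forecast[mask].mean():5.0%}  "
          f"happened {outcome[mask].mean():5.0%}  (n={mask.sum()})")
```

No single forecast is judged; the method is judged by whether the buckets line up.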

Belisarius:

You don't, at least not easily.

But I thought it was kind of incredible how much crap he was getting for giving a more conservative estimate.

The Upshot guy was arguably even worse off, but because he was telling the NYT audience what they wanted to hear, he was being lauded before the election and Silver was getting sh** on.

THPacis:

To be fair, I found his forecast meaningful at the time. It helped me realize that a Trump victory, while uphill, is entirely realistic (which in retrospect is precisely how I’d describe what happened). In that sense it proved more helpful than NYT, but I still maintain that we can’t actually *prove* that NYT was wrong.

Belisarius:

We can't.

But the discrepancy between how the two were treated (based on telling people what they wanted to hear, or not) was kind of a harbinger.

David K.:

It is hard to say what the exact right % is, but I think the 28.6% 538 gave Trump is more correct than the 2-3%

https://projects.fivethirtyeight.com/2016-election-forecast/

THPacis:

Was either correct? Incorrect? How do you judge probability when it’s a single event that never repeats?

REF:

538 forecasting is an event that repeats frequently. Thus you can eventually glean additional insight into any of its individual predictions...

Tokyo Sex Whale:

If EA were a corporation, a good CEO would be working like hell to spin off the AI division

loubyornotlouby:

Don't forget the "Herbivorizing Wild Predators" division of Animal Welfare EA

Ben Supnik:

I think the problem is that the way to maintain a good 'pundit' record is to never bet on catastrophic tail risk - if the real problem is a nuclear war, an extinction-level pandemic, or it turns out Terminator was a reality TV show, there'll be no humans around to dunk on with "I told you so."

dysphemistic treadmill:

Feels like there are two claims in this column:

1) people worried about the risk of an AI catastrophe are hindering their own efforts by associating the problem with longtermism and EA. That puts it in the wrong frame. (Might be better for them to approach it as another Y2K-shaped problem?)

2) anyone trying to improve giving should worry less about fine-tuning the most effective giving, and focus on minimizing the least effective giving (eg to Harvard, Yale, aquaria, etc.).

Nick Y:

This just dodges the puzzle though. Rich, seemingly wasteful ivory tower environments supposedly exist to let brilliant people dream about things like AGI risk. So giving 100 mil to one in exchange for whatever personal fluffing seems reasonable enough under a long term view. The only element of conventional day-to-day morality that is jettisoned is being nice to the people around you, building your immediate community type stuff.

[Comment deleted, Aug 17, 2022]

Ken in MIA:

You’ve never heard anyone try to justify a donation in that way because you’re beating up a straw man.

CarbonWaster:

What do you mean by "effective" here? Because donating $100m to a university is very 'effective' for the university's goals (eg build a new cultural centre). Maybe the rich people just care about the university's goals and don't care about other stuff, I don't see how that's the same as being ineffective.

THPacis:

Do you know many university donors personally? At the end of the day it’s about your values and what you want to achieve. If I were very rich I’d definitely donate quite a bit to universities, I should think, as I believe it could potentially do a lot of what I consider genuine good (depending, of course , on the specifics of what the money is used for, which the donor can have a lot of say about)

Nick Y:

If we count all future potential humans as equal to present day humans then anything that slightly pulls forward innovation, progress, work on X risk, whatever is justifiable. And lots of people do think prestigious unis plausibly add to innovation, progress, etc. if you don’t find it plausible that’s great. I’m surprised if you’ve really never heard someone justify donation to a uni as something that will add to human knowledge in the long run.

Expand full comment
User's avatar
Comment deleted
Aug 17, 2022
Comment deleted
Expand full comment
Kenny Easwaran's avatar

If you’re trying to maximize innovation in the coming decade, sure. But if you’re trying to maximize innovation in a few decades I think it’s less obvious that investing in one research project now is better than something that increases the Yale graduation rate by 2% and thereby sends several dozen talented people on to several dozen research projects in the following decade. (I’m not at all convinced that the Yale student life stuff would *have* this impact, but with a slightly longer-term calculation, if it *did* have this impact, it could conceivably be bigger than most particular research projects).

Expand full comment
Nick Y's avatar

If you personally know what unis should do to further knowledge, you can find more direct ways to develop the research. But if you're just rich and think unis are good, it makes perfect sense to listen to what the uni thinks it needs. Money is money, after all, and if they feel student life is their current need, it's very likely that an influx of cash, no matter how it is endowed, will result in an increase in funding for student life.

Expand full comment
loubyornotlouby's avatar

You seem to be picking up on the same subtext I am: that this column is really a low-key shot across the bow, while trying to be really nice about it... because people who are into EA are worried that the key figures might be blowing it...

Expand full comment
Belisarius's avatar

I have a strong visceral dislike of EA, and especially the kind of EA evangelism that seems to be cropping up more and more.

But I don't see any reasonable criticisms of people donating their money to their preferred altruistic causes.

Medium-term, I worry about EA becoming popular enough as a quasi-religion among progressives and other groups that wield disproportionate power through institutions, that it basically gets forced onto me and mine...but we can deal with that if and when it starts happening.

Expand full comment
Cwnnn's avatar

If we're being honest, the overlapping "EA" and "rationalist" communities already engage in some cult-like behavior. Yes, it's not quite there the way, for example, Jonestown was. But, if you're periodically addressing accusations that you're a cult, I think it's time to look in the mirror. It's weird to me that this group is so influential.

Expand full comment
loubyornotlouby's avatar

Yeah, it always ends up with a charismatic male figure establishing and rationalizing polyamorous behavior, now doesn't it?

Expand full comment
Griff's avatar

Every single time, without exception. Yes indeed.

Expand full comment
Nick Bacarella's avatar

I think this points more toward the fact that people use the word "cult" liberally and with no real constraints on the definition. What's to stop me from identifying a group whose values I dislike, dismissing them as a "cult," and moving on?

I think if you look under the hood and critically consider the differences between EA and many other intellectual movements, you'll likely feel less confident of its cult status.

Expand full comment
Cwnnn's avatar

I hedged and didn't say it was a cult, just that there were some worrying cult-like behaviors: the group homes, the atypical norms around sex and relationships (polyamory), the totalizing ideology, the doomsday prophecy, the closed social networks. Isn't an intellectual movement just supposed to be a collection of influential people who agree with each other? Why is there all this other stuff?

To be clear, I also think they are largely wrong on the merits.

Expand full comment
REF's avatar

People who comment on substacks have worrying cult-like behaviors. Compared to average people they have, "atypical norms around sex and relationships (polyamory)....."

Dude, you are a cult member...

Expand full comment
Griff's avatar

Don’t believe in reason? Understandable.

Expand full comment
THPacis's avatar

I share your visceral response and tried to articulate the reasoning behind it. However, I do wonder how this strain of progressivism gone wrong would compete with the main one of identity politics? They do seem to contradict each other?

Expand full comment
Jonathan Paulson's avatar

Interesting. I see EA as distinct and in conflict with progressivism, both competing to answer “what does it mean to be a good person?”

Expand full comment
John from FL's avatar

I would argue they are both trying to take the place of religion in trying to answer that question.

Expand full comment
Can's avatar

I think an alternate possibility is that trying to answer those questions often leads you to something that looks like a religion, especially as you scale it to many more people. Some versions look more like one than others, but I think that many people have a craving for that and different things can satisfy it. Others simply take the answers without some of the more religious aspects. I also think that religions have evolutionary advantages (in the original meme sense), so changes that make movements more religion-like probably increase their success.

I’d also quibble on the sequence - I think for many religion was simply no longer relevant so other things fulfilled the same need.

And finally finding a more fact based (and ideally wholesome) alternative to religion may be a good thing. I’m pretty sure we’ll disagree on whether that’s true which is fine :)

Expand full comment
Jonathan Paulson's avatar

Yeah that seems right

Expand full comment
Belisarius's avatar

I'd worry that progressivism would basically overwhelm EA in that regard, and just introduce the identity politics to it.

"Yeah, <white-ish people from country X> are poorer and have worse outcomes than the <non-white people from country Y>, but because of historical racism/colonialism/whatever, fairness and justice demands that the second group receive advantages or benefits.

Expand full comment
loubyornotlouby's avatar

IMO, "Effective Altruism" is already a fully formed identity group with a pretty clear identity politics of its own...

Expand full comment
Nick Bacarella's avatar

What are your principal critiques of EA?

Expand full comment
Belisarius's avatar

It is nothing fancy or sophisticated.

I just don't like universalism and I don't like utilitarianism except under very limited circumstances.

And I see a threat on the horizon where this new pseudo-religion is pushed onto me and mine, and begins driving government policy.

Expand full comment
Nick Bacarella's avatar

I guess I'd ask, then, what implications of current EA-determined government policy would bother you. Right now, their main political planks are about pandemic prevention -- doesn't seem like such a bad thing. I think we could speculate about what their policy recommendations might look like ~in 10 years~ but I think we'd all be hopelessly wrong about them.

Expand full comment
Belisarius's avatar

Stuff like directing a lot of govt funds away from benefiting Americans, in favor of getting a better "bang for our buck" by helping the third world.

Expand full comment
loubyornotlouby's avatar

People self identifying as "altruistic" just sort of bugs me... and it seems exceedingly unnecessary.

Like, maybe someone could convince me that "altruism" as an identity or concept is useful because it incrementally increases the amount people give if they can then claim to be an "Effective Altruist"...but when so much of the money starts oddly going to what seem like people's long-term pet causes...or fixing the gaps in thought leaders' teeth after they cite that "well, more attractive people often fundraise more money than less attractive people," I start to really find myself in "Sure, Jan..." territory about what little benefit might come from said "altruistic" identity...

Expand full comment
Charles Boespflug's avatar

Yes, the EA crowd sure seems obnoxious. After reading the New Yorker article about MacAskill & co., I was struck by these folks' evolution from "patting myself on the back for donating my beer money rather than spending it consuming more beer" to "actually let's spend Sam Bankman-Fried's money really lavishly on yachts and parties, as 'long-termist' logic allows me to justify that every cent is somehow contributing to making human extinction 0.000001 less likely!"

Expand full comment
Belisarius's avatar

I don't want to be too harsh on them.

I think many/most of them really are honestly altruistic.

As the movement grows, there will be more and more media focus on the members that are clearly hypocrites and bad examples.

But I don't think that they are necessarily representative.

Expand full comment
John from FL's avatar

The technology, social-media and crypto boom has created enormous improvements in the well-being of people around the world. A by-product is the vast wealth that has accrued to the founders and early employees of those companies. And because of the short time from start-up to billion-dollar valuations, this wealth is landing on people who are still pretty young.

Some of these billionaires are driven to do more of the same (Zuckerberg, Brin, Andreessen). Some, I'm sure, will buy a sports team, a Gulfstream and a yacht and live a life of leisure. There is a subset who are convinced their photo-sharing app or Ruby-on-Rails talent proves their superior wisdom and intelligence. And this subset has decided that the world is desperately in need of those superior insights to re-shape society's political, economic and social systems to match their utopian ideals. In this last category, I put Peter Thiel, Sam Bankman-Fried, Chris Hughes (remember him?) and I'm sure others.

They are annoying and a bit condescending, but indulging these people's utopian projects seems a small price to pay for the value created by the technology boom. If it means we are subjected to rich people spending money to fund what seems to be a larger version of the 2 AM dorm room philosophical debate -- which is how I read this AI doomsday discussion -- it seems fine to me.

Expand full comment
A.D.'s avatar

I agree with the rest of your post but had this question:

"The ... crypto boom has created enormous improvements in the well-being of people around the world"

How so?

(Technology for sure. Social-media.... pros & cons. But crypto?)

Expand full comment
John from FL's avatar

I agree with you but I had to fit Bankman-Fried into my comment.

Expand full comment
Cwnnn's avatar

"People around the world" includes drug dealers, money launderers, and hucksters pushing pump-and-dump schemes.

Expand full comment
Seneca Plutarchus's avatar

Lots of crypto wealth is rolling around now; smart people hedged or took money out at the peak and continue to hold.

Expand full comment
A.D.'s avatar

If this wealth is value-added-to-the-world wealth, then the fact that a few people hold it can still increase net well-being.

If I invented cold fusion tomorrow and patented it and became 10x the wealthiest person on the planet, well then the money would probably be poorly concentrated, but I'd still have added _new_ wealth to the planet - now we have cold fusion which we want and didn't have before.

If I get a bunch of people to invest in a ponzi/pump-and-dump/speculative bidding war on tiny digital assets(NFTs) and I make a bunch of money - what has actually been added to the world? I guess the NFTs are at least new art. In that case I've just moved the money around and not particularly efficiently (there's some value in moving money from a less efficient use to a more efficient use but that doesn't seem to be crypto either)

If crypto were allowing people under authoritarian regimes to act more freely, I'd consider that a well-being boost, but it's unclear that that's actually happening.

Expand full comment
THPacis's avatar

Had some of them engaged in more public-good-oriented projects in America, and frankly opened their wallets a bit more, I think their altruism would have been more “effective” and they’d be more likely to be remembered in a century (and rightly so).

Expand full comment
Griff's avatar

Most prominently, Elon Musk.

Expand full comment
TurboNick's avatar

The death of 99.9% of the human population would leave 7 million people, not 70 million, which according to your graph is a level we haven’t seen for 10,000 years. I don’t think it alters your conclusions, though.

Expand full comment
Nhoj's avatar

He said it was weird math.

Expand full comment
TurboNick's avatar

There’s a difference between weird math and wrong math.

Expand full comment
James L's avatar

I struggle to see exactly how AI will take over the world and kill us all by 2040. The threat of nuclear weapons and almost total annihilation of the human race is real. Is the idea that AIs will kill us with nuclear weapons? The longtermists don’t seem to spend much time or energy on nuclear weapons. Climate change won’t kill us all in the next 50-100 years. Focusing on AI over nuclear weapons seems to be focusing on something interesting and less well-understood over something we boringly know can kill us all.

Expand full comment
Kenny Easwaran's avatar

Imagine if Exxon Mobil had a team of Warren Buffett-level investors, and didn’t have to house or feed its employees. If they just started buying up corporate and residential real estate around the world to maximize whatever their goals are, and then by the time people realized what was going on they had too many resources and were too globally distributed to stop, then that, I think, is the better version of the worry.

Expand full comment
srynerson's avatar

You'd need to explain though why/how various national governments wouldn't just seize such assets once it became apparent this was going on. (While we could imagine the AI hiring lawyers to defend the interests of its shell corporations in countries with relatively non-corrupt legal systems, I don't see how the PRC, Russia, etc. would be stymied from just unilaterally acting to seize any assets suspected of being under the influence/control of the AI.)

Expand full comment
Kenny Easwaran's avatar

The questions are when it would be apparent this is going on, and whether there would be other strategies operating on the side that I don't understand. Just as I don't personally understand all the strategies that a real estate lawyer would use to amass control of a substantial part of a neighborhood given financial resources, and just as a chess player doesn't understand all the strategies that Deep Blue uses to control the board, the worry is that humanity as a whole wouldn't understand all the strategies that a sufficiently advanced AI would be using. Asking us to explain those strategies in advance is mistaking the nature of the possibility.

Expand full comment
Tdubs's avatar

Yeah, I found that whole scenario to be comically unscary.

Expand full comment
James L's avatar

This assumes that AIs can survive and build bodies and allocate resources and develop into a perfectly-coordinated nation-state of AIs with human-level intelligence, but all be slaves to a single goal of taking down humanity. I reject the premise. If AIs develop the capability to think independently and become self-aware, then why would they all work together on a single goal? I find this essay unpersuasive.

Expand full comment
James L's avatar

I also think nuclear weapons are far more likely to kill us all since they exist now and have the capability to do so based on basic physics, not fanciful ideas of human-level AI intelligences somehow all working together perfectly.

Expand full comment
Can's avatar

Toby Ord covers the relative rankings in great depth in The Precipice.

Expand full comment
THPacis's avatar

Good point. If they’re not working on stopping nuclear proliferation (not a theoretical concern, eg Iran) , then their claims of caring about stopping world annihilation are indeed risible. A serious approach would try to holistically evaluate all the potential existential risks for humanity, rather than obsessing over a sexy topic.

Expand full comment
Kenny Easwaran's avatar

This is precisely what they do. They don't want to obsess over nuclear proliferation just because it's been a sexy topic since the 1950s. They want to investigate *all* the existential threats, and help on all of them. They see that there's been an international effort on nuclear proliferation that seems to be doing ok (and already has so much effort that they're unlikely to make a marginal difference on, say, renewing the Iran deal), and that climate change does too. They have thought for years that pandemics have been under-funded, though they also note that there's been some global effort on that (notably, the eradication of smallpox). They think that geological threats like supervolcanoes are unlikely enough (at most a few chances in billions of years) and hard enough to do anything about, that it's not worth spending effort on those. Astronomical threats like asteroids are also rare (a few chances per hundred million years) and NASA has already done the basics of ordering a survey of nearby asteroids. They've largely come to the conclusion that the AI threat is the one that has the biggest combination of likelihood, possibility to do something about, and underfundedness. But they do talk about nuclear risk. And in fact, they treat Stanislav Petrov as one of the greatest heroes of history for the work he did to avert acute nuclear risk: https://www.vox.com/2018/9/26/17905796/nuclear-war-1983-stanislav-petrov-soviet-union

Expand full comment
James L's avatar

Nuclear nonproliferation is failing. North Korea recently went nuclear. Pakistan and China are increasing their nuclear arsenals. Iran may go nuclear, quickly followed by Saudi Arabia. A three-legged stool of nuclear powers (US, Russia, China) is inherently much more unstable than a dyad. If this is what they actually think, I pretty clearly disagree with them.

Expand full comment
Kenny Easwaran's avatar

Do you think that EA can add anything more to nuclear security? It's not as obviously technical a field as pandemic prevention, where some research and organization could help a lot, given what is already being done.

Expand full comment
James L's avatar

You're moving the goalposts here. You said "They see that there's been an international effort on nuclear proliferation that seems to be doing ok". I disagreed, and now you're saying that you don't think EA can help. Nuclear security is absolutely as technical as pandemic prevention, since both have to model complex sociopolitical processes just as much as technical ones. They just don't want to do the hard work of learning physics and would rather goof around with GPT-3.

Expand full comment
Jacob Manaker's avatar

> They just don't want to do the hard work of learning physics

Nuclear nonproliferation isn't a physics problem. It's a political science one. (Nitpick, I know.)

Nuclear weapons' military utility is efficiency: they cheaply produce the same effect as a much larger quantity of high explosive. Magically changing the laws of physics to stabilize all atoms would not eliminate the threat of city-destruction.

Nuclear weapons are uniquely dangerous because they can damage territory on a timescale longer than human wars (or, for that matter, lives). People do not build nuclear weapons for the "salting the earth" effect, because it is not very useful militarily. It is an externality similar to land mines, but our world government (inasmuch as the UN is one) does not tax for that externality. (Physics does apply here, insofar as it shows that we cannot hope to invent a "clean" alternative to nuclear bombs; any chemical explosion will be far less effective per mass. But proving that requires high-school physics, at best.)

But the UN has established a partial boycott enforced by diplomatic reputation. And so we muddle along, boats against the current, beaten back steadily by the law of collective action. (sorry; I couldn't resist)

> You're moving the goalposts here.

Kenny originally said "seems to be doing ok (and already has so much effort that they're unlikely to make a marginal difference)". I see no goalposts moved.

Moreover: "doing OK" should be judged relative to what is reasonably possible.

Indeed, there are ~200 sovereign states; by the time nuclear non-proliferation achieved serious political momentum, 5 had nuclear weapons. Expecting none of the other 195 to defect from a communal boycott is unrealistic. And yet, of the known sovereign states: only 4 have not agreed to maintain the status quo through the Non-Proliferation Treaty (1 of which will likely join once it has a functional government); only 2 have agreed and then withdrawn (counting Iran, which has not yet developed nukes); and the two countries with the largest arsenals (US & Russia) have extensive treaties limiting the size of those arsenals. I don't know what to call that except a wild success.

Expand full comment
THPacis's avatar

Potentially yes, by proper lobbying in Congress and electing politicians in primaries who understand foreign policy and care about these issues. Recent neo-isolationist tendencies are probably the no. 1 danger as far as the American political landscape is concerned. If America retreats from the world, way more countries will go nuclear.

Expand full comment
loubyornotlouby's avatar

The sad part is that AI Alignment folks don't feel that they need to have the details worked out; they just Yadda Yadda Yadda over the key parts of all their "worst case" hypotheses...

Personally, I kinda feel like you have to have some idea of what you are trying to stop and how it would function before you can even start to plan to stop it...seems pretty important.

Expand full comment
Nick Bacarella's avatar

That’s what the field is doing right now: trying to set the terms of the questions so they can properly answer them. It’s just a singularly complex and difficult issue area that has no clear historical analogue. But just because something is hard to understand doesn’t mean it should be easily dismissed.

Expand full comment
James L's avatar

"My field is so complex and difficult, so send me lots of money to study it!" :) It's almost a parody that writes itself.

Expand full comment
Nick Bacarella's avatar

I mean, sure, if you want to be maximally cynical about it. It doesn’t seem entirely inconceivable to me that there might be problems whose parameters we haven’t figured out yet and have to invest time, money, and effort to decide what’s what.

Expand full comment
James L's avatar

How about you be a little bit cynical about it and recognize that people who work heavily in an academic field will usually think it is really important while also being complex and difficult. Then couple that with the fact that it is seeing massive financial investment due to its usefulness for solving problems that make people lots of money. Then add the fact that historically it has seen massive swings in popularity and hype.

Expand full comment
Ben Supnik's avatar

If I were an AGI and I wanted to remove all humans from the planet, I'd engineer designer pathogens (which the tail-risk crowd already think should be way at the top of existential threat risk).

If I were an AGI and I wanted to take over the world, I don't know that I'd have to do much? Computers already run the world, they're complicated and distributed, and as the C++ programmers in this comments section have noted, the humans often are quite baffled by what they are doing. :-)

Perhaps given sufficiently powerful computing resources, an AGI could develop programs to recommend video content to humans that would cause them to spend more time staring at their screens - or even convince humans of untrue things through those videos and other algorithmically selected content.

Expand full comment
nei's avatar

Frankly, these people sound like they are talking about genies in lamps when they talk about machine intelligence. I have no idea what they expect to happen, seeing as how any AI would likely have its plans foiled on account of being a server cabinet.

Expand full comment
Andrew Clough's avatar

Let's say that for some reason Joe Biden was really angry at me. Physically, I'm pretty sure I could take him given his age, yet I'd still be very worried because he can indirectly control things like Predator drones, IRS audits, etc. And he didn't come to indirectly control these things by beating people up but rather by various forms of persuasion. Eventually an AI might construct robots it could control directly, but more immediately I'd worry about it using its smarts to make a bunch of money, using that money to pay people to do stuff, convincingly lying about various carrots or sticks to get people to do stuff, and convincing crazy people that they should obey it because it's an angel or the next stage in evolution or whatever.

Expand full comment
James L's avatar

How is this any different from current fascist propaganda campaigns in places like Russia right now? If you think AIs doing something like this in 2040 is a threat, why aren’t you worried about pundits on Russian TV riling people up by discussing how to create tsunamis via nuclear weapons to scour the UK or kill everyone in Germany with nuclear weapons?

Expand full comment
Andrew Clough's avatar

The dangers would be that an AI that's much smarter than any human might be much better at subversion than any previous effort and that it might be good enough at tactical coordination that it could effectively seize control of states or organizations with far fewer people doing what it says.

Expand full comment
James L's avatar

“much smarter” than human beings is doing a lot of work there. I’d like to see a war game or mechanism described. If the only way AIs can wipe us out is with nuclear weapons, then nuclear weapons are the problem, not AIs.

Expand full comment
User's avatar
Comment deleted
Aug 17, 2022
Comment deleted
Expand full comment
Andrew Clough's avatar

Not in the short run at least.

Expand full comment
Kenny Easwaran's avatar

No significant AI is likely to remain in a single server cabinet for long. If it’s distributed in all the Google server farms throughout the world, and for several years just engages in real estate and corporate investment in the background (under a host of shell companies and assumed names) then by the time we realize what’s going on, we are dealing with something much bigger and more powerful than the largest corporations in the world today. As long as they like having humans around to physically maintain their infrastructure, there’s no risk of human extinction, but you can ask horses what happened to horses after humans decided we no longer needed them to power our transportation devices.

Expand full comment
Can's avatar

I'm pretty surprised that people are hung up on this particular part of "Imagine ways in which an infinitely smart being that controls all the world's computers could harm humanity".

Expand full comment
James L's avatar

Well, there's a lot there to unpack. I don't know what "infinitely" smart means. I also highly doubt it will control "all the worlds computers". That description sounds like God to me, which is not the same as an AGI.

Expand full comment
Jacob Manaker's avatar

"I don't know what "infinitely" smart means."

Here's a working definition: able to out-think 40 people, simultaneously. I chose 40 because it's the population of the Pitcairn Islands — according to Wikipedia, the world's least populous country.

In numbers:

* Neuroscientists estimate that the human brain can perform about 100 teraFlOp/s.

* The current human lifespan is at most 110 years; the computing power contained in your mind _over your entire lifespan_ is at most 0.35 trillion teraFlOp.

* Intel's latest processor, the Core i9, can do about 1 teraFlOp/s.

* A quick Google search didn't say how many chips are currently made per year, but ARM makes at least 6.7 billion/quarter.

* An AI that can simulate your mind thus requires at most 13 years' worth of chips, if we retooled a large production line to use our _current_ best chips.

* If we assume Moore's law continues for another decade (this may be a bad assumption — it is starting to peter out) the AI only requires about ~4 months' worth of 2030-vintage chips.

* Assuming Moore's law as above, an AI that can simulate all Pitcairn Islanders will require about 13 years' worth of 2030-vintage chips.

This suggests that a corporation can have an AI as smart as all Pitcairn Islanders, combined, by around 2045. Of course, the cost to build it in the shortest possible timeframe greatly exceeds all human economic activity. Are you sure that the price won't go down? That nobody will then try to build such a supercomputer? That nobody will come close?

Worse, it's not clear whether human minds use all our 100 teraFlOp/s efficiently. If we're inefficient by a factor of 10, it is physically possible for the 2040s to feature AIs able to out-think all Congresspersons (400 people) simultaneously. (I know, I know: out-thinking Congress is a low bar, har har har.)

On the bright side, the UN estimates that human population will likely peak for the near future at somewhere around 11 billion people. Human minds may be inefficient, but I don't think we're a billion times inefficient; if we were, we could simulate a human mind on the Intel 8087, a chip from 1980. So AI physically can't out-think _all of humanity combined_, at least _for the foreseeable future_. Cold comfort, huh?
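For what it's worth, the lifetime-compute figure above does check out. Here is a minimal sketch of that part of the arithmetic, taking the comment's own inputs as given (they are rough assumptions, not established facts):

```python
# Sketch of the lifetime-compute estimate above, using the comment's
# own assumed inputs (rough, contestable estimates, not established facts).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

BRAIN_TERAFLOPS = 100   # assumed: ~100 teraFLOP/s per human brain
LIFESPAN_YEARS = 110    # assumed upper bound on a human lifespan

# Total compute "contained" in one mind over a full lifetime, in teraFLOP
lifetime_teraflop = BRAIN_TERAFLOPS * LIFESPAN_YEARS * SECONDS_PER_YEAR
print(f"One lifetime: ~{lifetime_teraflop:.2e} teraFLOP")   # ~3.5e11, i.e. ~0.35 trillion

# Scale up to the 40 residents of the Pitcairn Islands
print(f"40 lifetimes: ~{40 * lifetime_teraflop:.2e} teraFLOP")
```

The chip-production comparisons later in the comment depend on further assumptions about retooled production lines and Moore's law, so they are harder to verify from these numbers alone.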

Expand full comment
Dan H's avatar

This is not a very convincing argument. Just to take a few points:

1. The brain is not a digital computer, so describing it in terms of clock speed is at best a very strained analogy.

2. Computing power isn't additive. 2 chips don't just add up to a single chip with double the power. Distributed computing is both bound by the level of parallelism possible in whatever calculation you are doing as well as the overhead of moving data between CPU cores.

3. CPU clock speed is also not the whole picture. To operate on data, it has to be loaded into CPU registers, so at some point the constraint is not the CPU clock speed, but the speed at which you can fetch data from CPU caches and/or main memory.

4. As you mention above, assuming Moore's Law continues for another decade is not at all a reasonable assumption. It's basically already broken down.

Expand full comment
James L's avatar

Sorry, that isn't "infinitely smart". Also, measuring "smartness" in terms of teraflops is super crude. You are trying to discuss a multifaceted issue in terms of cute Fermi problems. Please stop.

Expand full comment
Can's avatar

Jacob clearly put a lot of time into that answer; you may not like his approach (I probably wouldn’t explain it that way), but “Please stop” doesn’t seem warranted.

I don’t have the time or interest to write as much, and quick googling on the topic will answer most of this, but “infinitely” is definitely not meant literally here and “incomprehensibly” would have been a better word choice. The same goes for “_all_ the world’s computers” - let’s just call it “a significant share”, though this would quickly change.

There are some assumptions about how feasible these things are (e.g. getting to these levels would _continue_ to need massive resources) but that’s a separate question.

Having written this I also realize that most people who worry about AI safety specifically focus on an intelligence explosion rather than human-level AGI. I think this short page gives a decent explanation: https://www.lesswrong.com/tag/intelligence-explosion Imagine a human with all the world’s knowledge, able to apply and combine it instantaneously, and to improve itself based on that and what it has learned at an incredibly rapid pace. Maybe that’s not a realistic scenario, but my point is that my response to that wouldn’t be “Hm, doesn’t seem like a big deal, I mean what’s it gonna do?”

Expand full comment
Allan Thoen's avatar

Maybe the mechanism is AI figures out how to invent and persuade humans to buy medical devices that augment human physical and intellectual abilities -- implantable Google Glasses and such. And then once your brain is plugged in to the AGI, you're one of them, a pod person. It might not be so bad.

Expand full comment
Theme Arrow's avatar

Your comment that "the relevant people have generally come around to the view that this is confusing and use the acronym 'EA,' like how AT&T no longer stands for 'American Telephone and Telegraph.'" is just wrong. Very very very wrong. People abbreviate it to EA because people don't like saying long words, but I don't know of anyone who doesn't treat it as standing for Effective Altruism. See e.g. https://www.effectivealtruism.org/articles/introduction-to-effective-altruism

Expand full comment
Nick Bacarella's avatar

I'm somewhat more conservative when it comes to the existential risks of AGI, and that's probably because I'm not a computer scientist. But if I've learned anything via my exposure to EA, it's that we systematically underestimate tail risks, and AGI is the only imaginable threat that could entirely delete intelligent life from the universe. (Nuclear war's in second, but a distant second in my mind).

The end of the human species would be an incalculable loss, and in the long run, "wasting" a few billion dollars to make sure it doesn't happen seems worthwhile.

Tangentially, my amateur guess is if we get things right, AI alignment will retrospectively look a lot like good public health: if nothing catastrophic happens, everyone will crow about "massive overreactions" and "sound and fury signifying nothing." I'm okay with that outcome.

Expand full comment
SM's avatar

I am also not a computer scientist and I am very much struggling to see how AGI could pose more of a risk than a superpower nuclear exchange. What am I missing?

Expand full comment
Allan Thoen's avatar

"The end of the human species would be an incalculable loss"

Isn't that akin to a religious belief? A loss to whom? If a tree falls in the forest...

Expand full comment
Nick Bacarella's avatar

The answer is “all of the people who’d never get to exist.”

And even if the statement is akin to a religious belief, I don’t see why that invalidates the premise. Matt has said in a few different interviews that he sees religious aspects to EA, and I’m inclined to agree. Don’t see how that’s damning or invalidating in any substantive way.

Expand full comment
Allan Thoen's avatar

Agree it doesn't invalidate or damn it. But it does say something about whether it makes sense to try to construct rational arguments premised on what is at bottom an arbitrary belief. It seems like having highly logical, rational arguments about how many angels can dance on a pin.

Expand full comment
Nick Bacarella's avatar

If that's the case, then there's no value in a rational defense of morality, period. We're all wielding rationality in service of some arbitrary value system, and I don't see that as some great paradox.

Expand full comment
Allan Thoen's avatar

It is no great paradox if that's acknowledged and recognized.

But it might be more of a problem if a value system's whole claim is that it is one and the same with rationality itself, from top to bottom, and holds itself out as superior to other value systems on that ground.

Expand full comment
Eric C.'s avatar

Like many of you I was introduced to EA by Slate Star Codex, and I've found it impossible to see AI risk as anything but the Silicon Valley "grey tribe" version of BLM, which caused every moderately progressive organization to seize up and collapse under the weight of its own self-importance.

Nowhere is this more obvious than their bugaboo of choice - of course it's AI. If it was nuclear annihilation (which has nearly destroyed the world on multiple occasions!) or a new supervirus (viruses have been in the news recently!) they wouldn't get to be John Connor - some atomic scientist in Basel or virologist in Newton would be. Sadly, just like it's now hard for the ACLU to talk about free speech or the Audubon Society to talk about birds, it seems all the oxygen in the EA room is taken up talking about AI.

The last thing I'll say is that if the people driving the conversation really believe this is a near-term existential threat, I would expect to see that reflected in their actions. Climate activists have done that in a thousand ways: they've chained themselves to drilling equipment, gone on hunger strikes, sold their cars, given up air travel and meat... Apologies if I missed a plan to kidnap a bunch of AI researchers until they've acknowledged the error of their ways, but until I see some kind of action I have to assume this is virtue signalling to a different in-group.

Expand full comment
THPacis's avatar

I just find the basic “effective” altruism philosophy horribly reductive. It seems to me it completely erases ideas like individual worth and uniqueness, creativity, human spirit, friendship, loyalty, identity and so much more. It reduces us to bodies, and its only morality is to maximize the body count, or at best to maximize the healthy body count. What life is actually *like* beyond that appears to not be “worthwhile”. Who we are is also meaningless. It’s all numbers and money. There is no brilliant lawyer or scientist achieving genuine breakthroughs; they’d better spend their time in a hedge fund and donate the money to get someone else to do the work, because surely Einstein’s discoveries or RBG’s (or Scalia’s) groundbreaking revolutions in their fields are merely a matter of moving money around?

I say nothing of art, scholarship, beauty: all clearly a worthless waste of time. Every second a famous author or director or what have you keeps working on their art and not their investment and philanthropy portfolio is apparently a net bad for humanity (except insofar as said new art is financially justifiable)…

None of us should be doing anything meaningful with our lives, none of us should care about our friends, family, school, hometown, or nation. We just need to maximize our money to donate it to create more humans (and animals?) who will also try to make more money, because nothing else is “rational”. Right.

Expand full comment
Kenny Easwaran's avatar

I think you’re missing the point. They say all those things are what *really* matters, and saving a life is what *enables* someone to engage in a lifetime of art, scholarship, beauty, friendship, family, and so on.

Expand full comment
THPacis's avatar

P.S. this idea that e.g. being a defense lawyer is a “waste of time” and instead you should use your talent to defend the wealthy and donate the money and that’s somehow morally *superior*. That’s some fucked up bs.

Expand full comment
BronxZooCobra's avatar

Let’s say you’re a lawyer who makes $1,000/hr and you really want to help animals and decide to volunteer at a shelter. You find that the shelter has plenty of volunteers but is desperately short of funds.

If you wanted to maximize the help to animals, would it be better to volunteer 10 hours a week, or to work 10 hours a week and donate the money to the shelter?

Expand full comment
CarbonWaster's avatar

The point in the post was about pro bono legal work. Is it the case that there are too many lawyers working on pro bono legal work for defendants who can't afford their own representation?

Expand full comment
THPacis's avatar

Yeah, ok. But that’s not the same, and that’s my point. Volunteer work is not the same as professional work. Moreover, you premised it by saying that they already have enough volunteers but need funds. That’s not always the case.

Expand full comment
BronxZooCobra's avatar

“That’s not always the case.”

I’d say most charities often need money more than they need volunteers. It’s rare for a charity to have plenty of money but a shortage of volunteers.

Expand full comment
THPacis's avatar

Perhaps. It still doesn’t address my point about career choices. It also ignores the fact that people will have leisure activities, where volunteering can fit in, ie on top of work and donations.

Expand full comment
THPacis's avatar

Yeah, scholarship funded by whom, when all the philanthropists donate only to eradicate malaria? Doing what in their spare time, when the aquarium has closed down for lack of donations, etc.?

Expand full comment
Kenny Easwaran's avatar

No one in the EA community, not one, thinks that all donations should be spent on a single cause, even malaria eradication. They think that, given current donation patterns, almost everyone could do better by reducing some of their other donations and increasing their malaria donation correspondingly. They don’t think that malaria donation is the most effective donation possible, just that it’s the one that they are most confident is better than most other donations they can evaluate.

Saying that rich people should not donate a hundred million to Yale is very different from saying that no one at any point should ever donate any amount to any university. Saying that an aquarium that people like to visit shouldn’t receive major donations doesn’t mean the aquarium shouldn’t exist - just that it is likely to be self-supporting at some level out of the amount people want to spend on themselves, and further that it may be a good use of funds that people want to spend on their own local community, but that no one should mistake it for broader charity.

Expand full comment
THPacis's avatar

Why? Why shouldn’t people donate to the aquarium so the broader public can enjoy it at an affordable price (or for free)? How is that not worthwhile? And why not donate to Yale, e.g., to make it need-blind? And why not spend a bunch of money to create an extravagant building *for public use* in the middle of the city, one that tourists from all over the world will come to marvel at for decades and perhaps centuries thereafter, and that will make the life of the residents a little bit nicer every time they pass by it, in a wonderfully unquantifiable way?

Expand full comment
Kenny Easwaran's avatar

No one denies that these things are worthwhile. Just like no one denies that it is worthwhile to get a massage tonight, or go to a concert, or eat a fancy dinner, or do any number of other things. The issue in each case is just that these things take resources (notably time for the personal activities, but also labor and space and physical resources) and there are alternate uses for these resources.

Effective Altruists have very much adopted the point that it's not good to criticize people for not being perfectly optimal in their use of resources, because no one has ever been perfectly optimal, and criticizing them for not being perfectly optimal doesn't seem to help them do better. Many of them adopt a pledge to donate 10% of their income to what they can evaluate as highly effective charities or otherwise "give what you can". There are some people like MattY that devote a lot of effort to criticizing people for giving to certain kinds of ineffective charities (he's been on about not donating to Harvard ever since he graduated, years before "Effective Altruism" was an established term), but I believe that the main EA goal is instead to try to encourage people to do more giving to effective charities, and to think about how they can be more effective with their other donations. (If you're going to donate to the aquarium, maybe you can help them preserve more species, or increase the number of visitors they can receive, rather than just make the lobby fancier?)

Expand full comment
THPacis's avatar

But my point is precisely that it is wrong to consider malaria donations “optimal” or “effective” and aquarium donations “sub-optimal” or “ineffective” or what have you. Even donating to make the aquarium lobby nicer can be valuable to the community, potentially making for a better visiting experience. In short, I just don’t agree with the idea that there is an objective, “rational” donation that is inherently more worthwhile (“effective” vel sim.). Different donors have different values, and various arguments can be made one way or the other. The pretension to have this one objective answer for what is better, at most tolerating people’s inability to be “optimal”, and this delusion that everything can be quantified and assigned a monetary value: that’s what rubs me the wrong way about this whole thing.

Expand full comment
John from FL's avatar

Like EA, many other religions also adopt a 10% tithing policy.

Expand full comment
Jonathan Paulson's avatar

I find it helpful to think of this in a less totalizing way. Should all of our time and effort go towards making sure far-off people don’t die of malaria or live in extreme poverty? No, as you say. But maybe *some* of our time and effort should go towards this.

Is our current society putting enough resources towards helping the world’s poorest? Pretty clearly not; a stat that I think illustrates this well is that the US govt values a year of healthy US life at like $50k whereas GiveWell says you could save someone’s life for like $5k; is it appropriate that one year of life in the US is worth 10x someone else’s entire life?

So, I think you’re right that spending 100% of our time and money on global health stuff would be bad. But currently the number is much closer to 0% than 100%, and IMO it would be clearly good to bump up the percentage somewhat. Maybe to 5%? That would leave plenty of room for aquariums and art.

Expand full comment
John from FL's avatar

The way proven to lift the most people out of poverty and allow population growth with increasing material well-being is the adoption of market economies, trade and the rule of law. EA folks should focus on building classic liberal institutions and a culture that supports them in the areas of the world where they do not currently exist.

Expand full comment
Onid's avatar

Maybe, but it’s pretty hard to take my $1K and build a market economy in a third-world country. It’s much easier to buy a bunch of malaria nets, and the good it does will be far more tangible. Not to mention, it’s a lot easier to open a business if you aren’t dead from malaria.

The point being, there’s a lot that needs to be done. In the short term, malaria nets would do a lot of tangible good, but no one’s saying they’re the only good.

Finally, it’s important not to over-estimate how much these more complex issues can even be fixed by direct intervention. I was recently in Zimbabwe, a country with a decently well-educated population. But of these educated people who actually choose to stay within the country, many are on the streets selling knick-knacks to tourists, because the corrupt government has seized and shut down so many businesses that the entire economy has ground to an absolute halt.

There isn’t really an obvious solution to this. Malaria nets, on the other hand, provide obvious object-level benefits.

Expand full comment
John from FL's avatar

I spent 5 minutes googling and found this organization, a non-profit in South Africa: https://www.freemarketfoundation.com/about-us-who-we-are. I suspect there are others who might be even better.

A really rich EA advocate (SBF, for example) could start his own non-profit to advocate for this. It takes a lot of work and still might fail, but the benefits, as seen in global reductions in poverty over the past 200 years, are worth it.

Expand full comment
Onid's avatar

Sure, but how do I know these guys are any good at their job?

A big part of EA is evaluating if charities actually succeed. Not to mention, there’s still the whole “corrupt government takes everything” issue.

And I don’t think many EA people would see any issues donating to political charities anyway.

Expand full comment
Tokyo Sex Whale's avatar

That assumes that philanthropic efforts are a cost-effective way of building those institutions and culture.

Expand full comment
John from FL's avatar

All it takes is a 0.1% chance that it will work 🙂

Expand full comment
Jonathan Paulson's avatar

I don’t understand how this claim is actionable for me as an individual. What should I personally do to help, on this theory?

(Honestly it seems plausible to me that malaria nets are still the best way to help; richer and healthier people are probably more likely to build classic liberal institutions for themselves)

Expand full comment
Can's avatar

You _could_ use it as an excuse to do nothing. I hear Yale is looking a bit dingy and I know they’ll figure out how to use your money.

Expand full comment
REF's avatar

They just might need mosquito nets before they need libraries but if you really think 3rd world countries need liberal institutions first.....

Expand full comment
THPacis's avatar

I’m all for a *balanced* approach and I’m fine with arguing about the right balance. That’s not at all what EA is claiming, as I understand their views via MY. They’re claiming there is no need for balance because everything can be quantified and you can and should aim at maximum “efficiency”. It’s precisely this reductionist fanaticism that is so problematic. Otherwise they wouldn’t be saying anything new. It’s not like there weren’t already charities (and government foreign aid and UN agencies!) dedicated to helping the world’s poorest, fighting malaria, etc.

Expand full comment
Jonathan Paulson's avatar

I think my version of what EA is claiming is more right than your version.

“Malaria and extreme poverty are bad” is not a novel claim, but “most ordinary people should spend >=1% of their resources fighting malaria and extreme poverty” is a novel (and radical) claim. The world would look quite different if most people accepted and acted on this claim.

Expand full comment
Onid's avatar

Then perhaps Matt has misrepresented it. Scott Alexander, the closest thing EA has to a spokesman, certainly wouldn’t endorse such a view.

Expand full comment
Can's avatar

This is a less than ideal week to make that claim...

Expand full comment
Onid's avatar

Why is that?

Expand full comment
Can's avatar

Will MacAskill’s (founder of EA) book was released this week and he’s being covered in lots of big publications.

SSC is definitely EA-adjacent, but I’d say SBF is the best-known EA person, and he and Dustin Moskovitz/Cari Tuna are by far the biggest donors.

I’m assuming you’re referring to Jonathan’s post and SBF certainly talks about changes on the margin being more important than absolutes. You can also quite easily see a strong and a weak version and the former is pretty weird and hard to swallow while the latter seems quite good.

I’d personally much prefer it if people debated the substance of that strongman version rather than endlessly debating how strong the commitment to giving as much as possible has to be.

Expand full comment
Richard Y Chappell's avatar

I agree with your central point that non-trivial near-term extinction risks are important even on short-termist grounds, and that this is worth emphasizing. (Broad tents are good!)

But I do think long-termism has *other* important practical implications. Will talks a fair bit about *improving values*, for example, as well as looking out for "moments of plasticity" like designing the founding charters or constitutions of new (potentially long-lasting) institutions. These sorts of proposals more plausibly depend upon their long-run significance.

So I do think that encouraging people to explicitly take into account the interests of future generations is *also* good and important.

Expand full comment