307 Comments

I already find these AI risk columns tedious and repetitive. Matt might as well have written multiple essays on how the Reasonabilists have new, convincing arguments that Zorp the Surveyor is imminently coming to destroy the world.


"[The fish donors and pro bono death penalty attorneys] have decided, based on either their considered ethical view or a lack of adequate reflection, that maximizing the number of lives saved is not the most important thing."

I think this claim gives too little credit to the donors and attorneys, who probably wouldn't disagree that maximizing the number of lives saved is the most important thing. I think their reasoning – or, at least, my reasoning if I were in their shoes – would be that maximizing the number of lives saved isn't *their particular part to play in the betterment of society*. There is clearly no shortage of vital work to be done; I've figured climate change is a major threat to human well-being and have decided to try to make a career of fighting it, using what I think are my most valuable skills (research, communication, math, generally holding a lot of facts in my head). This isn't to disparage other priorities – it just seems like this is where I can help (it turns out employers disagree, though, so who knows).

One of the things I find most grating about social-justice leftism is its transcendental-meditation-esque belief that we can and should have a planned attention economy, wherein everyone focusing their attention at the same time on e.g. Covid-19 or racial justice is necessary and sufficient to fix those problems. It would be a bummer if EA turned into just a mathed-up version of that. Not only are we never going to achieve a worldwide consensus on an ordered prioritization of all the good works that need doing, but there's a point of diminishing returns where, by turning everyone's focus towards one thing, we lose the gains from specialization. This isn't a criticism of EA as a philosophy, which is a valuable corrective to the norm of locally focused giving, as Matt points out. But it is a warning against the risk that the EA movement trips over its own feet in the future by not telling individuals to factor the existing landscape of philanthropy into their calculations.


I'm a software security engineer, and I find worrying about AI killing us all to be absolutely off the wall bonkers.

I also think directing my charitable contributions towards malaria nets and deworming appears to be the correct thing to do.

I really hope fear of Terminator doesn't crowd out actually useful empirical analysis of philanthropy.


I admit this is probably a stupid question, but in these predictions, how does the AI defeat us? Is it Terminator-style: hack the ICBMs (or we stupidly put it in charge of the ICBMs) and it triggers a nuclear exchange? Something else, like intentionally causing cascading infrastructure catastrophes? I ask because even the highest-tech conventional military hardware is high maintenance and easily disabled, so while commandeering some of it could cause damage, I don't see how it gets to extinction level. I'm struggling to distinguish this from a 'really pissed off computer virus,' which, while potentially pretty destructive, wouldn't be hugely different from some of the cyber threats we live with today.

Aug 17, 2022·edited Aug 17, 2022

EA has some good stuff going for it. They are risking it by "betting" on AI. And that's really what's happening here. It's not enough to say it could or couldn't happen, or it's a percentage possibility. Look how much crap Nate Silver gets for getting the 2016 election "wrong." EA will never live AI risk down if things start petering out in 10 years when it becomes clear there's no path to AGI.

In that scenario, do donors withdraw from EA projects? Does EA become a laughingstock? That's not far-fetched to me.

EA, for the sake of its incredibly effective charity-giving record, would be much better off playing things conservative to maintain its credibility. But from what I can tell, it's a movement of young, starry-eyed, tech-adjacent philosophy majors who are more and more making this movement about themselves (aka their own interests vs. what is objectively better for the movement).


Feels like there are two claims in this column:

1) people worried about the risk of an AI catastrophe are hindering their own efforts by associating the problem with longtermism and EA. That puts it in the wrong frame. (Might be better for them to approach it as another Y2K-shaped problem?)

2) anyone trying to improve giving should worry less about fine-tuning the most effective giving, and focus on minimizing the least effective giving (e.g., to Harvard, Yale, aquaria, etc.).


I have a strong visceral dislike of EA, and especially the kind of EA evangelism that seems to be cropping up more and more.

But I don't see any reasonable criticisms of people donating their money to their preferred altruistic causes.

Medium-term, I worry about EA becoming popular enough as a quasi-religion among progressives and other groups that wield disproportionate power through institutions, that it basically gets forced onto me and mine...but we can deal with that if and when it starts to happen.


The technology, social-media and crypto boom has created enormous improvements in the well-being of people around the world. A by-product is the vast wealth that has accrued to the founders and early employees of those companies. And because of the short time from start-up to billion-dollar valuations, this wealth is landing on people who are still pretty young.

Some of these billionaires are driven to do more of the same (Zuckerberg, Brin, Andreessen). Some, I'm sure, will buy a sports team, a Gulfstream and a yacht and live a life of leisure. There is a subset who are convinced their photo-sharing app or Ruby-on-Rails talent proves their superior wisdom and intelligence. And this subset has decided that the world is desperately in need of those superior insights to re-shape society's political, economic and social systems to match their utopian ideals. In this last category, I put Peter Thiel, Sam Bankman-Fried, Chris Hughes (remember him?) and I'm sure others.

They are annoying and a bit condescending, but indulging these people's utopian projects seems a small price to pay for the value created by the technology boom. If it means we are subjected to rich people spending money to fund what seems to be a larger version of the 2 AM dorm room philosophical debate -- which is how I read this AI doomsday discussion -- it seems fine to me.


The death of 99.9% of the human population would leave 7 million people, not 70 million, which according to your graph is a level we haven’t seen for 10,000 years. I don’t think it alters your conclusions, though.
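
A quick check of that arithmetic, assuming the roughly 7 billion world population implied by the graph:

7,000,000,000 × (1 − 0.999) = 7,000,000,000 × 0.001 = 7,000,000

So about 7 million survivors, a factor of ten below the 70 million figure.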


I struggle to see exactly how AI will take over the world and kill us all by 2040. The threat of nuclear weapons and almost total annihilation of the human race is real. Is the idea that AIs will kill us with nuclear weapons? The longtermists don’t seem to spend much time or energy on nuclear weapons. Climate change won’t kill us all in the next 50-100 years. Focusing on AI over nuclear weapons seems to be focusing on something interesting and less well-understood over something we boringly know can kill us all.


Frankly, these people sound like they are talking about genies in lamps when they talk about machine intelligence. I have no idea what they expect to happen, seeing as how any AI would likely have its plans foiled on account of being a server cabinet.


Your comment that "the relevant people have generally come around to the view that this is confusing and use the acronym 'EA,' like how AT&T no longer stands for 'American Telephone and Telegraph'" is just wrong. Very, very, very wrong. People abbreviate it to EA because people don't like saying long words, but I don't know of anyone who doesn't treat it as standing for Effective Altruism. See e.g. https://www.effectivealtruism.org/articles/introduction-to-effective-altruism

Aug 17, 2022·edited Aug 17, 2022

I'm somewhat more conservative when it comes to the existential risks of AGI, and that's probably because I'm not a computer scientist. But if I've learned anything via my exposure to EA, it's that we systematically underestimate tail risks, and AGI is the only imaginable threat that could entirely delete intelligent life from the universe. (Nuclear war's in second, but a distant second in my mind.)

The end of the human species would be an incalculable loss, and in the long run, "wasting" a few billion dollars to make sure it doesn't happen seems worthwhile.

Tangentially, my amateur guess is if we get things right, AI alignment will retrospectively look a lot like good public health: if nothing catastrophic happens, everyone will crow about "massive overreactions" and "sound and fury signifying nothing." I'm okay with that outcome.


Like many of you I was introduced to EA by SlateStarCodex, and I've found it impossible to see AI risk as anything but the Silicon Valley "grey tribe" version of BLM, which caused every moderately progressive organization to seize up and collapse under the weight of its own self-importance.

Nowhere is this more obvious than in their bugaboo of choice – of course it's AI. If it were nuclear annihilation (which has nearly destroyed the world on multiple occasions!) or a new supervirus (viruses have been in the news recently!), they wouldn't get to be John Connor – some atomic scientist in Basel or virologist in Newton would be. Sadly, just as it's now hard for the ACLU to talk about free speech or the Audubon Society to talk about birds, it seems all the oxygen in the EA room is taken up talking about AI.

The last thing I'll say is that if the people driving the conversation really believe this is a near-term existential threat, I would expect to see that reflected in their actions. Climate activists have done that in a thousand ways: they've chained themselves to drilling equipment, gone on hunger strikes, sold their cars, given up air travel and meat... Apologies if I missed a plan to kidnap a bunch of AI researchers until they've acknowledged the error of their ways, but until I see some kind of action I have to assume this is virtue signalling to a different in-group.

Aug 17, 2022·edited Aug 17, 2022

I just find the basic “effective” altruism philosophy horribly reductive. It seems to me it completely erases ideas like individual worth and uniqueness, creativity, human spirit, friendship, loyalty, identity and so much more. It reduces us to bodies, and its only morality is to maximize the body count, or at best to maximize the healthy body count. What life is actually *like* beyond that appears not to be “worthwhile”. Who we are is also meaningless. It’s all numbers and money. There is no brilliant lawyer or scientist achieving genuine breakthroughs; they’d better spend their time in a hedge fund and donate the money to get someone else to do the work, because surely Einstein’s discoveries or RBG’s (or Scalia’s) groundbreaking revolutions in their fields are merely a matter of moving money around?

I say nothing of art, scholarship, beauty – all clearly a worthless waste of time. Every second a famous author or director or what have you keeps working on their art and not on their investment and philanthropy portfolio is apparently a net bad for humanity (except insofar as the new art is financially justifiable)…

None of us should be doing anything meaningful with our lives; none of us should care about our friends, family, school, hometown, or nation. We just need to maximize our money to donate it to create more humans (and animals?) who will also try to make more money, because nothing else is “rational”. Right.


I agree with your central point that non-trivial near-term extinction risks are important even on short-termist grounds, and that this is worth emphasizing. (Broad tents are good!)

But I do think long-termism has *other* important practical implications. Will talks a fair bit about *improving values*, for example, as well as looking out for "moments of plasticity" like designing the founding charters or constitutions of new (potentially long-lasting) institutions. These sorts of proposals more plausibly depend upon their long-run significance.

So I do think that encouraging people to explicitly take into account the interests of future generations is *also* good and important.
