363 Comments
Xavier Moss:

I think with a lot of the EA movement specifically, a lot of people have trouble squaring the source of funds with their use – you can buy someone like Elon Musk or Bill Gates as a philanthropist, since they made money by making things, but many people (including myself) perceive crypto billionaires as having made their money off false promises to everyday people. If you see crypto as a Ponzi scheme, it becomes harder to believe the people in it are genuine altruists.

Obviously the classic argument exists that you can do some evil to make money to do good of greater utility, but we know human beings are self-serving, and this is just a retread of the politicians who compromise everything to win power because 'without power we can't help anyone.' Well, those people get power and usually don't help anyone anyway. You can also argue that they genuinely think crypto is good, but I'm not sure that's an argument for the effectiveness of their altruism.

I'm a believer in the principles of effective altruism – in fact, I've structured my career to do meaningful work on an important public health problem, and taken a large pay cut to do so – and I'm funded by one of the large foundations, so I see the role rich funders play. But as often happens in the modern age, being effectively altruistic and being 'in the EA community' are not the same thing, and these subcultures form, become insular, and lose sight of their goals. Long-term utility is precisely one of those areas that allows human beings to subconsciously skew their thinking around their own selfish interests, and in fact the 'rationalist' community seems to be more about internal status-seeking around issues like AI risk than about thinking through real problems, as well as making the fundamental 'the world is just an engineering problem' error common to both crypto and online rationalism.

Frankly, this is the telling sentence 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all.

Dan H:

"Frankly, this is the telling sentence 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all."

I'm not sure I understand. He found a crypto trading arbitrage that generated millions of dollars, which he intended to give away. He went from that to founding FTX, which made him a billionaire and allowed him to give away vastly more money. Would it have been better if he had stayed at the Centre for Effective Altruism?

Xavier Moss:

I'll concede here that this was a bit of a hyperbolic flourish to end the comment – in the end, in this particular case, it is better for the world that he is able to give vastly more money to (hopefully) effective causes.

But in general, I find this mythical certainty ethically problematic, like all those actors who came to LA just 'knowing' they'd make it when no one believed in them, and here they are at the Oscars. Well, no one's interviewing the waitresses and the busboys, are they? Most people who do what he did are probably still spending years trying to get their business to work or, more likely, failing to do so, as most people do. A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan. What if it was just luck that it worked for him, and we simply haven't heard from the dozens of finance people still telling us it'll all be worth it when they're billionaires and save the world?

Broadly speaking, I think most consequentialists undervalue the uncertainties of the human mind, especially their own. So taking a decision like this – going from definitely working on something good in a small way to devoting yourself exclusively to making money with the intention of giving it away – undervalues your own uncertainty about both your success and your future decision-making. If you took everyone who told themselves 'I'll just focus on being rich now, I'll do some good later,' I'm not sure you'd wind up with much net good in the aggregate.

Dan H:

"A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan."

I don't know, I'm not sure I've seen this. In fact, I've had the opposite experience with EA folks, in that they sometimes fetishize probabilistic thinking to an unhelpful degree. But more to the point, I seriously doubt that SBF was certain it would work out and he would become a billionaire. In interviews I've heard with him, he talks about giving FTX a ~10% chance of working out (i.e., not failing outright). But even in that case it is still a positive expected value play. If it fails, he goes back to Alameda or some other prop trading firm and continues giving a few hundred thousand a year to charity. But if it works, it could be huge and he could be giving away billions (which is what happened). Even if the latter is unlikely, it is still worth doing because the payoff is huge.
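(A quick Python sketch of the expected-value comparison being described here – the ~10% is his stated number, but the dollar figures are my own invented illustrations, not his actual economics:)

    # Expected value of the risky path vs. the safe path (illustrative numbers).
    p_success = 0.10              # his stated ~10% chance FTX works out
    give_if_success = 5e9         # assume $5B eventually given away on success
    give_if_failure = 3e5 * 10    # assume ~$300k/yr donated for 10 years at a prop firm

    ev_risky = p_success * give_if_success + (1 - p_success) * give_if_failure
    ev_safe = give_if_failure     # the near-certain outcome of staying put

    print(f"{ev_risky:,.0f}")     # 502,700,000
    print(f"{ev_safe:,.0f}")      # 3,000,000

Even with a 90% chance of outright failure, the risky path dominates by two orders of magnitude under these assumptions – which is the whole "positive expected value play" point.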

More generally, I think the way you characterize the amount of certainty in the EA community is not accurate. My experience reading EA thinkers is that they wrestle with uncertainty in a very clear-headed way.

James M:

I second this – one main piece of philosophy SBF has talked about specifically in this vein is that there are steeply diminishing marginal returns to an individual making more money, but far less diminishing returns to trying to improve the world, so people should be less risk-averse and try high-variance earning-to-give strategies. As a community, if we have 100 people making $350k/yr or 100 people all trying to make billions with a 1% chance of success, we'd have more money in the latter option (rough numbers sketched below) – and he thought he had a 10% chance of success, so it was even better that he took that bet.
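(A back-of-the-envelope version of that comparison, assuming "billions" means ~$2B per success and a ~40-year earning career – both numbers are mine, purely for illustration:)

    n = 100
    safe_total = n * 350_000 * 40            # 100 steady earners over a career
    moonshot_ev = n * 0.01 * 2_000_000_000   # 100 longshots at a 1% hit rate
    p_at_least_one = 1 - 0.99 ** n           # chance the community gets any hit

    print(f"{safe_total:,}")                 # 1,400,000,000
    print(f"{moonshot_ev:,.0f}")             # 2,000,000,000
    print(round(p_at_least_one, 2))          # 0.63

With these made-up numbers the longshot portfolio already wins in expectation, and at the 10% odds SBF claimed for himself the expected haul would be ~$20B – though, as noted below, this only makes sense if the community really is risk-neutral about the money.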

James L:

The argument is that high-risk earn-to-give opportunities like entrepreneurship are positive expected value, not that you're certain to succeed. If for every dozen EA entrepreneurs you expect one to make hundreds of millions that they give away, that's a good deal even if the other 11 fail.

In general, many EAs argue for being risk-neutral – taking riskier bets with your career than you otherwise would if you were just earning money for yourself.

Xavier Moss:

Maybe some of my antipathy comes from interacting with people in the world – I have spent a large amount of time in the humanitarian sector and a fair bit in the well-meaning-startup space – and the vast majority of the time EA was used as kind of a prosperity gospel. 'The only thing I owe to the world is to become a billionaire – nothing matters ethically beyond that. If I do, I can start being a good person. If I don't, statistically, someone else did, so I can continue giving nothing back.'

Maybe this is an unfair characterisation but, human beings being who we are, I think it's a philosophy that's very self-serving. I appreciate the probability focus of the writers on EA websites – like I say, I genuinely believe in lower-case effective altruism – but what I've seen in the real world is either an Ayn Rand level of selfishness justified by deferred responsibility, or throwing money into cool-looking but useless projects in the developing world. I presume the counterargument is that since it's ineffective it isn't, by definition, EA, but at some point the debate starts being academic.

Azareal:

Pretty much everyone I know in EA is in law or finance, making $500k–3 million a year (not billionaires), and mostly giving money to organizations like GiveDirectly or others ranked very high by GiveWell. Maybe there are people like the ones you describe, but my guess is that EA is a handy excuse, and if they didn't have that, they'd make up something else.

James L:

I think you're right to push back against people who say they don't need to bother doing good until / unless they become a billionaire. That's on the more extreme end of what I've heard, but arguments like this are a problem in EA and tend to be a huge turn-off to great people (like you!) who might otherwise be valuable community members. For what it's worth the EAs who I know basically universally hate Ayn Rand.

I'm curious - do you remember any examples of cool-looking but useless projects in the developing world that you've seen people throw money at?

Xavier Moss:

what3words is my favourite example of an absolutely terrible idea that sounds good to engineers. There have also been a couple of projects using blockchain for land registries which completely fail to take into account how smallholders are dispossessed (the person who can take your land by force can force you to send him your token as well). Numerous platforms of the general '[X] for [Y]' concept (LinkedIn for Refugees! Etsy for Weavers!) that never offer any advantage over existing commercial sites. Several things with drone delivery, although that one I concede may work if done correctly. There have also been any number of photo-op projects with crypto dudes coming into small villages to take pictures – I'm sure I can dig some up.

To be fair, I myself have built any number of completely useless and expensive software platforms funded by governments and NGOs in the developing world – I'd say 1 in 5 were any good when I was consulting – so finding a project that has real, direct impact is hard. And I'm very critical of the aid industry as well. But a lot of the Silicon-Valley-engineer projects parachute in with lots of hype and just make no sense given the realities on the ground.

Johnson:

The Hollywood waitress analogy doesn't work at all. Most no-name EAs are software developers, traders, corporate lawyers, etc. who still make buckets of money even if they aren't billionaires. Tbh it's kind of hard not to make money if you're a Stanford CS graduate, which a highly disproportionate number of EAs are.

David G:

From Matt's description of EA, I'm pretty sure the Chinese Communist Party would describe itself as the world's largest and most successful EA practitioner. The gist of it seems to be put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat.

Dan H:

"The gist of it seems to be put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat"

I'm not sure where you're getting this, but I'd say that this is very much not the gist of EA. Have you actually heard anyone in the EA community say or write anything resembling this?

David G:

I don't suppose SBF also has views on tax policy and the regulation of crypto that he'd expect his preferred candidate to share, not to mention income inequality? Perhaps he could share those with us. The notion that the effective altruist billionaire class should determine the world's largest problems and their solutions, instead of voters and their representatives, seems laughable to me – all while they get huge tax deductions, payable by the rest of us, for their pet causes.

smilerz:

There was an election - SBF didn't decide anything for anyone. And, even if elected, Flynn still had to participate in the legislative process along with all of the other elected officials.

KetamineCal:

Everyone has blind spots. And I assume most people would say "the government is stealing from poor people by taxing EAs" is a bridge too far, even if it may technically be true in some cases.

I think most EAs DON'T think that, but it's pretty easy to expect some "libertarian" type to eagerly co-opt it. Humility is often a surrogate for trust, and the EAs don't exactly have that.

KetamineCal:

I think a lot of it is that crypto bros can be unbelievably self-confident and annoying. No one is projecting humility from that space. I assume many SB readers have been the "young bright person at work who wants to and knows how to improve things but gets ignored" and this is a similar phenomenon.

James L:

I have another comment below with some of your points that I disagree with, but for what it's worth I do think you're right about internal status-seeking around AI safety and insularity in general being cultural problems within (some subsets of) the EA and rationalist communities. I think AI risk is a real and important problem and want the people who work on these problems to have a healthy community.

Paul:

Has anyone seen an attempt to figure out how to weight the negative effects of different careers in the "earn to give" approach (or the traditional philanthropy model)? That would seem necessary for making choices about effective altruism. Obviously it's going to be hard to agree on anything for something new like crypto, but for a more typical "career in finance," can we say anything specific about the impact? If I make X million from pure trading activity, then other people have clearly lost X million, resulting in negative impact for those individuals. There would also be indirect impact from supporting the general trading system, and from the degree to which it might push companies into making short-term decisions that harm employees or customers, but that's going to be rather more difficult to measure (and would quickly turn into a debate about the pros and cons of the whole economic system...)

There's also a separate interesting question of the impact of trying to maximize your income within a given company, if increased wages / bonuses for some categories of employees leads to pressure to reduce wages for others?

BronxZooCobra:

"If I make X million from pure trading activity, then other people have clearly lost X million"

That's not correct. If you think oil prices are going to fall, you could sell Delta Airlines a futures contract to deliver X barrels of oil to its refinery in six months. If you're right, you can buy the oil for less and sell it to Delta at the agreed price. Delta is fine with that, as it needs predictability to price tickets, plan routes, etc.
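(A toy version of that trade in Python, with invented prices, to show why the trader's gain isn't simply Delta's loss:)

    strike = 100.0      # agreed futures price, $/barrel
    spot_later = 80.0   # actual spot price six months on (the bet pays off)

    trader_profit = strike - spot_later   # buy at 80, deliver at 100: +$20/barrel
    delta_paid = strike                   # Delta pays $100/barrel regardless

    print(trader_profit, delta_paid)      # 20.0 100.0

Delta paid $20/barrel above the eventual spot price, but what it bought was certainty: it could price tickets and plan routes months in advance without betting the airline on oil. Both sides got something they valued, which is what makes the trade positive-sum rather than zero-sum.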

Jason Sauby:

I think that's true enough when we're talking about those fundamental transactions, but are we operating in a world with so much money sloshing around in the financial sector that it's effectively a different thing now than what we were taught in Econ class?

Azareal:

The amount of "finance" that is speculative trading activity is just not that high. You are missing all the things like routine financing, private equity (which, despite the weirdly bad publicity, just comes in and runs companies better ahead of an IPO or sale), and risk distribution for existing lending (e.g. CLOs, where investors take the risk and the bank books it immediately).

Realistically, though, there just aren't enough hedge fund jobs to make this a big problem.

Nicholas Decker:

Trading is a positive-sum activity: you are producing more accurate information. If it were in fact zero-sum, then the people getting beat would leave the market.

[Comment deleted, May 23, 2022]

Xavier Moss:

We can argue about the usefulness of crypto, but the first statement isn't true. If I tell you I have a product that can turn lead into gold, and I sell it to you and it does no such thing, that's not a useful product. Obviously I created something you wanted – say, false hope – which might have 'utility' under some strict definition, but most people would consider the product itself useless and the transaction fraudulent. Information gaps prevent perfectly efficient markets.

Sharty:

Cryptocurrency is useful in that it very efficiently identifies people whose opinions I can safely ignore.

(it is worse than existing techniques for *everything* else)

[Comment deleted, May 23, 2022]

Tdubs:

"You can't trick millions of people for over ten years."

*Atheists have entered the chat*

Marc Robbins:

I'm very frustrated when the "heart" function doesn't work (maybe someone can explain why).

Anyway, Tdubs: "heart."

City Of Trees:

Substack has some sort of lag problem in its UI between the clicking of the heart (which does get registered immediately) and the display showing that you clicked it. It also seems to be worse on non-top-level comments. My advice would be to just click it once and wait, as clicking it again might undo the like.

Bo:

I wish that I had more than one heart to give, friend.

Alex S:

Traditionally, people probably didn't really believe the stuff they showed up to church for. They certainly didn't read their scripture or understand their Latin-speaking priests – the idea that you'd even want to do that is an invention of Protestantism (and the idea that Buddhists did it comes from their importing Protestantism too).

Traditionally the church in England 1. told people not to have sex the wrong way and 2. was used to fundraise constant parties for "saints' days".

[Comment deleted, May 23, 2022]

Matthew S.:

It has had the unfortunate historical precedent of being harmful to those who *don't* believe it, though.

Jim #3:

I'm not anti-crypto – call me neutral on it – but you just argued crypto is useful because people (specifically yourself?) made money on it. Not sure that logically computes...

Alex S:

> There are no information gaps in crypto

Well, there are, because essential components like connections to the real world can't be "on chain": exchange behavior, side bets, whether a project is a rug or not, that kind of thing. The big one being Tether, which wasn't public at all until NY sued them and now seems to be lying about its assets.

Plus, public smart contracts regularly get hacked for huge amounts, so the auditing systems don't seem to work very well. I don't think it can ever be safe to build on a system like this, where mistakes/bad transactions can't be reverted. The ability to do that is more important than any tradfi technical drawback.

JohnPaul:

The existence of social security is proof that you can trick hundreds of millions of people into supporting a Ponzi scheme for decades.

CarbonWaster:

'. . . the fact that it is still going pretty strong after more than ten years and has survived several major crashes shows that it is not a Ponzi scheme.'

It does not show that.

[Comment deleted, May 23, 2022]

Jason Sauby:

When people say crypto is a Ponzi scheme, I don't think they mean it literally; it's a metaphor. There may be some inherent value to the technology, but it appears to be wildly overvalued in the market at the moment, a bubble is something that can persist for a long time, and people who are invested in the bubble have a material interest in keeping it inflated.

For example, Tesla stock seems to be overvalued by conventional measures, even with the haircut it's been taking lately. That doesn't mean the company itself is running some kind of scam; the people I know who own one are very happy with the product. There's just an apparent disconnect between what the company is doing, and what the investors in the stock are doing on the secondary markets.

CarbonWaster:

There is a tedious debate many people have about 'what is a Ponzi scheme'. Crypto-skeptics often say 'crypto is a Ponzi scheme', and crypto-enthusiasts then respond that it doesn't meet various criteria to be considered a Ponzi scheme.

To be honest, this debate does not interest me very much. Things change, and new things are made, and not everything is an exact analogue of some past thing. The relevant point is that crypto assets largely do not represent sources of wealth exogenous to the system of their trading (in the way that stocks, bonds or commodities do). Consequently, what crypto investing shares with a Ponzi scheme is that there is a broadly fixed pool of invested capital, and that what Person A gets in profit comes from the losses of Persons B, C and D. In that sense, it is analogous to a Ponzi scheme.
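(A toy ledger in Python, with invented numbers, to make that arithmetic explicit:)

    paid_in = {"A": 100, "B": 100, "C": 100}   # cash each person puts into the pool
    cashed_out = {"A": 250}                    # A sells at the top

    pool_left = sum(paid_in.values()) - sum(cashed_out.values())
    a_profit = cashed_out["A"] - paid_in["A"]

    print(a_profit)    # 150: funded entirely by B's and C's contributions
    print(pool_left)   # 50: all that remains backing B's and C's holdings

If no exogenous value ever flows in, A's profit can only come out of what B and C paid, whatever the headline "market cap" says – which is the sense in which the structure is Ponzi-like even absent a literal fraudster.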

Alex S:

Madoff’s never actually crashed and probably could’ve run for a long time. The government just spoiled it by telling everyone it was a Ponzi.

Tdubs:

I would be amazed if the percentage of total current crypto wealth owned by people in "imperialist" countries were lower than those countries' share of world GDP. So your best-case scenario is that we've created a new paradigm that exacerbates historical wealth inequality?

[Comment removed, May 23, 2022]

David R.:

In order to generate consumer surplus and decouple wealth from land, technology has to actually *generate value*.

Distributed ledgers *may* someday do so in private applications.

The public ones which underpin current cryptocurrencies do not. They have value only insofar as a tulip bulb once did, as a vehicle for speculation.

I agree that *technology* is a good thing, but not all *technologies* automatically are.

Alex S:

> Distributed ledgers *may* someday do so in private applications.

I think they have a genuine mindset advantage, in that you can easily delete/overwrite data in a traditional database and it's difficult to replicate some of those guarantees. Immutable data is very useful for correctness, and programmers aren't taught to use it enough.

…but all that technology was invented in the '70s, and "blockchain" reinvents it in the least efficient way possible, plus tries to add a smart contract language designed by amateurs who didn't think about things like integer overflow first.
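(A minimal Python sketch of the overflow failure mode being alluded to. EVM integers are 256-bit, and early Solidity arithmetic wrapped around silently; checked arithmetic only became the default in Solidity 0.8:)

    MOD = 2**256  # EVM word size

    def add_uint256(a, b):
        return (a + b) % MOD   # wrapping add, as pre-0.8 Solidity did

    def sub_uint256(a, b):
        return (a - b) % MOD   # wrapping subtract

    print(add_uint256(MOD - 1, 1))   # 0: a huge balance silently wraps to zero
    print(sub_uint256(0, 1))         # 2**256 - 1: an "underflow" mints a vast balance

Several real token contracts were drained via exactly this kind of unchecked wraparound before overflow checks became standard.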

Jim #3:

Interesting. Can you link to anything to read on this?

Doctor Memory:

Maybe it's "not about crypto," but if the biggest name in applied consequentialism right now is, essentially, in the business of marketing Ponzi schemes to the gullible as his day job, that seems kinda relevant when I'm deciding how much weight to assign his opinions about existential risks.

Kenny Easwaran:

It’s an interesting new development. Up until a year or two ago I would have thought of Bill Gates as the one famous donor who was most explicitly affiliated with Effective Altruism, though he tries to cover it up in order to seem more normal.

City Of Trees:

Is he explicitly affiliated? I always took his philanthropic actions as a rather banal "help the world's neediest" without all the philosophical forays described in this article.

Kenny Easwaran:

Not quite explicitly. But even before the official Effective Altruism movement got started, the Gates Foundation got into the idea of evaluating their charity by concrete metrics. And Gates and Buffett created the Giving Pledge, inspired at least in part by Peter Singer.

mpowell:

It is just a completely different thing to try to objectively measure the performance of a charity versus what the EA people do. The difference is in the EA philosophy of which outcomes you consider desirable and how you weight their desirability. The Gates approach is perfectly understandable to a normal person; the EA part is not. That said, I have no idea how much Gates buys into the EA part.

Dan H:

I don't think that's true. "Objectively measure the performance of charity" is exactly what GiveWell does, which is probably the most widely known and influential EA organization in the world.

In fact, "objectively measure the performance of a charity" is a pretty good description of what EA is. It's just that words like "measure" and "performance" aren't so clear when you try and dig into the details. Performance relative to what outcome? What are we measuring exactly? And so we get a lot of internet arguments about what outcomes we should care about (including how much to care about people in the far future, etc).

mpowell:

I think my post was unclear. The EA movement's philosophy that we should not place any extra weight on our own community comes in addition to the attempt to objectively evaluate charitable impact. But I believe these are completely separable ideas, and the EA-specific one is completely foreign to most people, while the objective evaluation is perfectly normal to them.

City Of Trees:

But I see that as the sleight of hand that EAs use to nudge people toward their preferred causes. I may be determined to give to a non-EA-preferred cause, but I would also like to know which charities perform better at achieving that non-EA-preferred cause.

Can:

Most famous, definitely. But while they're a lot less famous, Dustin Moskovitz and Cari Tuna were previously the main big pure EA donors.

Alex S:

Elon is the most famous person to be literally associated with the religious EA people – he got his e-girl musician girlfriend by making a LessWrong joke at a party, and he founded OpenAI to literally pursue their silly ideas about defending against evil computers. OpenAI of course abandoned that mission because it has nothing to do with any real-life concerns.

Doctor Memory:

...four months later, this comment aged pretty well...

smilerz:

The vast majority of capital in crypto is not from retail investors.

Doctor Memory:

...so? If you think that institutional investors are immune to the delusions of crowds, I have a CDO that I'd be happy to let you into the AAA-rated (by Moody's!) top tranche of.

smilerz:

I think you are being entirely overconfident if you think it is obvious that institutions are being gullible by investing in crypto. And even if you are right, "it's not fair that this guy is ripping off Wall Street, and we shouldn't trust him" is a super weird take.

Doctor Memory:

- I have no particular insight into how much institutional exposure to crypto there is, and I very much hope it's "less than I fear"

- but I fear it's a lot: nobody knew how overextended on forex trading Barings Bank was until Barings ceased to exist, and that was _one rogue trader_. One of the major morals of 2007, for me, was that no compliance department on god's earth _actually_ knows what their trading desks are getting up to, and the ratings agencies will rubber-stamp known bilge without a second thought.

- "ripping off Wall Street" is all fun and games until it tanks the entire economy, taking your job and your retirement savings with it, which already happened once in the last 20 years, and I'm not looking forward to Round Two: Crypto Boogaloo

smilerz:

Unless you were dumb and pulled all your money out at the bottom, you didn't lose all your money in the market at any point in the last 20 years.

Doctor Memory:

Being able to ride out a market crash because you're in your 20-40s and have a stable job and can buy the dip is great. (Source: me, stayed employed in tech through the 2001 crash and the entire Great Recession, was definitely preferable to all the alternatives.) But allow me to introduce you to the extremely salient concept known as "time", which can crop up in two rather important ways:

- if you are approaching or at retirement age, having your pension/401k/IRA wiped out is, in fact, a pretty big deal: your heirs might get to ride the bounce back but there's every chance that you won't.

- rent (or your mortgage payment) is due on the first of the month and they don't care what happened to your retirement savings, nor do they care if you lost your job because of an economic crash. Similar dynamics apply to your grocery and heating bills.

I feel like "massive economic crashes are bad" should not be a proposition that needs re-litigating after 2008, but apparently here we are?

Andrew J:

I have really only come into contact with EA through Slow Boring, but, to be honest, they seem like a bunch of super creepy weirdos.

The no-community-ties thing is a nudge away from discouraging the formation of friendships or families.

The "earn to give" thing has strong "prosperity gospel" vibes, where all sorts of shenanigans are self-justified.

And the AI thing is just sorta weird.

Alex S:

I'm not sure if the EA philosophy leads to this, but the larger rationalist community does act like they've accidentally logicked themselves into joining a cult. They like to live in group homes, are into polyamory because they couldn't think of logical arguments against it, and do sometimes join 100% literal EA cults like Leverage Research.

Beyond unhealthy absolutist philosophy, I think joining cults might just be what people do when they live in Berkeley. It’s in the water.

Evil Socrates:

This is a depressing comment.

"I spend lots of time and money trying to actively improve the world and improve thinking about same".

"Sounds like outgroup low status stuff to me, CREEP."

Marcus Seldon:

The lack of community ties troubles me as well. Yes, EAs say that you can still have friends and family, but only because people would burn out if they didn't. That strikes me as abhorrent. My relationships with others are deeply important and morally justified in themselves; they aren't simply a means to the end of giving me the emotional bandwidth to work longer hours so I can donate to EA charities. I think people are deeply entitled to pursue and value particular relationships and projects in their lives for their own sake. Not exclusively, of course – I do think everyone should try to devote some of their efforts to making the world better in a consequentialist way – but it's only one aspect of the good life.

Marc Robbins:

While there are some lovely things about EA at the abstract philosophical level, what would make the world a far better place is if we could somehow convince everyone to increase their normal charitable giving by 10%.

You know, kind of a "slow boring" thing.

Kenny Easwaran:

There may be some who say this. But I think there are others who say that friends and family are what make life worth living and worth saving, and that you are a person just as much as anyone else is, and that you deserve as much time with your friends and family as anyone else. But if you can help a hundred other people have more quality time with their friends and family, it may well be worth sacrificing some of your own for that.

Andrew Keenan Richardson:

I don't think any EAs think that you should give up on having friends and family. I think it's clear to everyone that that would be an unhealthy recipe for disaster.

Josh:

EA reminds me a lot of the view that business should maximize its profits and that no other objective is morally justified (1). Both are principles with a lot of heft behind them that are also overly simplistic and undermined by a lack of curiosity and humility.

Pursuing profits drives economic efficiency, which does a lot to improve social welfare, even if it also generates lots of negative externalities. But some take it to imply that promoting employee welfare at the expense of profits is unwise, without even asking whether "investing" in employee welfare can generate a long-term return. Similarly, pursuing EA would probably improve the world greatly. But it takes incredible hubris to believe that a calculation of the expected harm from AI, based on no data, is correct and represents a greater potential harm than present issues like the impact of air pollution on health, or inequality.

What makes both camps seem like "weirdos" is how simplistic they are about the world. The human instinct that things like social ties matter should be respected, and those who don't respect it should rightfully be looked at askance.

1. I'm not supporting the "Friedman Doctrine," which justifies profit maximization purely on the idea that it's the only way to represent the interests of owners, but the more general idea outlined above.

John from VA:

It strikes me as a lot like the rationalist community. Many of them make good points, and society would benefit if most people moved on the margin towards their stated tenets. However, lots of in-group status-seeking, lack of nuance, homogeneity, and a complete lack of humility make me nervous about fully buying what they're selling. Personally, I'd prefer EA people to donate and encourage others to donate than run the government.

Alex S:

EA and the rationalist community are the same people, aren’t they?

(Meta-rationalism à la David Chapman is much healthier.)

John from VA:

There's a lot of overlap, especially with adherents, but I wouldn't say they're the same, since they're about different things. Personally, I'd describe myself as EA-adjacent, but the rationalist community creeps me out.

Alex S:

Yeah, I was specifically thinking of Julia Galef, who seems to have been Matt's introduction to EA but is (or used to be) one of the LW-style rationalists who think you shouldn't use your brain normally and should instead explicitly calculate Bayes' theorem on everything – or possibly just pretend you're doing that and say the word "prior" a lot.

https://metarationality.com/bayesianism-updating

He's also talked about SSC, who is, eh, good on his professional area and isn't too deep into the religious aspects of that community, but has a strange personal life mission to get everyone to learn about his weird online friends who invented versions of conservatism nobody else believes in.

Sam Tobin-Hochstadt:

In fact, the big problem in the EA community now is that they've been colonized by rationalists.

John from VA:

I have noticed that. Was it always the case?

Sam Tobin-Hochstadt:

No, it started as philosophy grad students eating only rice and beans so they could give more money to Oxfam, because they were persuaded by Peter Singer. I don't really know the intellectual history, but I think the shared fondness for thought experiments about the far future plays a role.

[Comment deleted, May 23, 2022]

Andrew J:

I don't really disagree with any of that, and the first EA stuff I heard about was largely in line with it.

But increasingly my impression is that the self-defined EA group is high on its own supply.

And as a guiding philosophy for representative democracy it makes no sense: "Vote for me as your Rep. and I will assign no priority to you or your community's interests whatsoever."

Dan H:

On the other hand "vote for me and I will focus on substantive, evidence-based policies to improve your life rather than empty ideological posturing" sounds like a good sales pitch for a politician.

Kenny Easwaran:

It *sounds* like one, but I don't think it always sells. Trump's election pitch was much more "vote for me and I will ideologically posture on your behalf against the nerds who aim for evidence." People care a lot about symbolism.

Dan H:

Somewhat, but I think a big part of Trump's appeal was that he was a "businessman" who knew how to get things done. Obviously in office he leaned much more heavily on the ideological posturing, but I think people generally underrate how much people voted for Trump in 2016 because they thought he would be effective.

Marcus Seldon:

Like Andrew, I don't disagree with this, but what bothers me is that I think most EAs only advocate the 10% rule because it's more likely to persuade people to get on board, not because they think giving only that much is sufficient. They would say that you should give up your relationships to focus more on earning to give if you thought it would be sustainable for you. I've explored the (online) EA community, and most of them *are* dyed-in-the-wool act utilitarians. So while I'll happily donate money to GiveWell charities, I'm suspicious of the broader movement.

Andrew Keenan Richardson:

In my experience, most EAs are realistic that doing something like donating 50% of your income is not sustainable or psychologically doable for most people. But consider that there are large numbers of people working for normal non-EA charities who are effectively taking huge pay cuts because they want to do good. A lot of people genuinely want to devote their life to making the world a better place but aren't sure how. It's good that EAs are clear-eyed about it.

teddytruther:

EA seems like a complete political non-starter. It combines the worst elements of neoliberalism (cold, reductive focus on efficient generation of dollars/hedons) and progressivism (esoteric and unpopular views held predominantly by people with a college degree).

David R.:

You forgot the neo-feudalism and pseudo-religiosity.

We’re basically talking about the nobility giving out boons to deserving members of the peasantry in exchange for indulgences regarding how they came to be nobility.

Nicholas Decker:

You are implicitly viewing wealth accumulation as coming at the expense of someone else. Thankfully, it doesn’t. Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants.

So, something like earning to give is good, *even if you never give!* It is far better to do something productive, that people actually *want*, than to work a personally satisfying but low paying job.

David R.:

My other comments at SB offer sufficient clarity on my thinking that I don't feel obliged to explain that I do, in fact, understand the concept of value generation.

"Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants."

But this sentence is sufficient to prove to me that you prefer to exist in some abstracted theoretical realm in which "lobbying" and "rent-seeking" are not concepts which run rampant through the American economy.

The EA people are a mix of professional class types looking for an endorphin hit (fine, better to get it helping poor people than snorting coke, but stop being a preachy fuck about it) and the rich seeking yet another reason to claim their iron hold on the American economy is in fact a good thing.

David R.:

Also, in support of my initial response, you are explicitly on-record saying that policies which support a decent standard of living for first world citizens are bad because they take money away from wealthy EAers who will donate it to third world citizens.

I quote: "Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?"

Since you've already taken the extreme-to-the-point-of-parody view and put it on record, I think we're done here.

Nicholas Decker:

Not so – lowering taxes so that more gets donated overseas is a very inefficient way of doing that. I am arguing that we should have *massively more* foreign aid spending (as well as completely open borders).

David R.:

Nice try.

Here's the whole exchange surrounding that quote:

ME:

EA is noblesse oblige for modern wokeists.

“It’s my duty to earn as much as possible by any means because I have the vision and wisdom to disburse funds according to the interests of the greater good.”

It says nothing of a duty to create a society in which everyone is earning a decent living through the fruits of their own labor, instead of relying on the rich for oh-so enlightened, “targeted”, “optimized,” handouts.

That latter outcome is something only policy and politics can achieve.

FRIGID:

You put your finger on my general unease with EA, which I couldn't articulate. And this is why I prefer predistribution vs redistribution. Change the rules of the economy such that work is rewarded vs hope the rich toss a few pennies downwards.

ME:

Policies that do that are actively *bad* in the current EA framework because they redirect income from donation-minded rich westerners to middle-income and working class westerners who are, by global standards, already rich.

It’s a terrible, profoundly fucked ethical framework on a macro level.

YOU:

Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?

Again, you're explicitly on record opposing policies that would improve quality of life for American/European workers because they would redirect dollars from (in your mind) ultra-poor EA donation recipients to rich-by-global-standards working-class Westerners.

But so would curtailing rent-seeking by capital in the US, so would regulating crypto effectively, so would introducing a payroll tax-funded universal healthcare scheme...

[Comment deleted, May 23, 2022]

David R.:

I’m trying to avoid replying to you, but this is just too obvious for even you to fail to understand:

The job of avoiding neofeudalism should not be left in the voluntary hands of the would-be neofeudalists, it should be forced upon them by the citizenry using the power of the state that they’re so desperately attempting to buy off at every turn.

Johnson:

EA isn't libertarianism; it doesn't imply that the state shouldn't also be redistributive. EAs generally support the standard laundry list of Democratic Party policies, but think that e.g. foreign aid and asteroid prevention should be given more money.

Given that, it's only "voluntary" in the sense that all personal morality is "voluntary." Of course people can choose not to follow it, but then they're not EA in any sense.

Onid:

And yet, those “neofeudalists” do exist, so wouldn’t it be better if they voluntarily contributed wealth to good causes?

[Comment deleted, May 23, 2022]

David R.:

EA is dead in the water because it's a thin veil for the bad guys, duh.

Virtually no one gives away enough to crimp their kids' ability to be at the top of the heap; EA is just another of many fig leaves and/or outright bribes designed to keep the rest of us from taking it all from them after they die.

Which is, to my mind, an inevitability. It will soon be near-universally understood that permitting the intergenerational accumulation of wealth above a level sufficient to generate, say, a few multiples of the median income in passive income is directly incompatible with democratic governance and the general well-being. At that point "inheritance taxes" will no longer be a question of revenue but of self-defense.

EA is yet another attempt to hold back the tide on the part of the very rich and another "self-actualizing" consumption good for the professional classes.

That's all it will ever be.

Alex S:

EA giving is unappealing to everyone except EA people. It doesn't make the rich people look good, and it won't make you look good if you give either.

The poorer people in your local area won’t appreciate it if you donate all your money towards mosquito nets in Africa, after all, even if it brings you down to their level.

Johnson:

Inheritance taxes have literally nothing to do with this topic. A basic part of EA thought is that you should give away all of your money while you're still alive, because sooner spending is better than later spending and you can't control where it goes when you're dead.

[Comment deleted, May 24, 2022]

Stuart:

I think you're viewing EA too rigidly. EA isn't a few people or institutions or a fixed approach to problems (cold, reductive); it's an idea about how to do the most good. I think EA will have different messages and approaches for different problems.

The approaches used to get individuals to donate to charities are going to be different from the approaches EA politicians use to get votes.

No one knows the exact, optimal formula for any of this, but I think EA generally favors testing different things and going with what works best.

Graham:

I think most people are smart enough to understand that it is good for wealthy people to give money to the poorest people, who need it most. I think that basic messaging could get you buy-in from a good chunk of the public. I don't know about going into all of the moral philosophy background with everyone, but I don't think that is necessary to understand the most basic idea here.

I think the biggest problem is getting people to not fixate on the more far-fetched long term problems EAs try to address, like rogue AIs, and to focus more on work like de-worming people in third-world nations.

staurofilax:

I would argue that of all the potential threats to humanity that we are currently aware of, generalized AI is the biggest one on the near-term horizon. It scares the shit out of me, and it's tough to conceive of the fact that the people who are rushing to create the first GAI are fully aware that by completing their goal, they may destroy humanity.

Graham:

Oh, I don't disagree it could be a problem; I just think it's a messaging loser for the EA community with most of the public. I can just hear progressives saying "they just want to give money to tech cronies while many Americans struggle," or something like that, and conservatives just laughing and saying "like Terminator? I'll be back... LOL good one."

Alex S:

Nobody is working towards AGI, current ML research is not capable of achieving one, and if you did it wouldn’t do anything to humanity. You can create some intelligence right now with your wife if you want though. It’s rather hard to get the result to do much of anything.

The rationalists are afraid of “super intelligence” because “intelligence” is what they value in themselves and they don’t want to not be special. But they haven’t defined “intelligence” (it doesn’t mean you can do whatever you can think of) and taking over the world is limited by other things that can’t be avoided, like entropy.

Johnson:

If you're going to sell EA to the public, it has to be in very simple principles, if for no other reason than that the average voter is not nearly smart or interested enough to understand it. Just like Democrats don't campaign on Rawlsianism, even though a bunch of Democratic policy types are Rawlsians.

Mark:

Agreed. I got a good chortle imagining a political candidate trying to explain the trolley problem to American voters.

[Comment deleted, May 23, 2022]

Alex S:

Leftists hate it when rich people do this because they see it as a representation of power, the only thing they care about. I don’t know if it’s occurred to them that giving away money causes you to lose power.

They’re generally opposed to charity, except of course they still want to do it, so they solved that by calling it “mutual aid” when they do it.

teddytruther:

A political platform that says "Just let rich people do what they want with their money, and hopefully they choose to do nice things!" isn't effective altruism, that's just standard issue libertarianism.

Any semi-serious attempt to implement EA as public policy would involve using the taxation power of the state to direct redistribution to areas of need. That wouldn't necessarily have to be direct redistribution – you could also re-engineer the tax code to strongly incentivize charitable giving to specific, high-yield causes. I suspect I'd be in favor of much of that platform! But it would also be a major uphill political climb unless the EA agenda were heavily modified by pragmatic/popularist considerations (e.g. domestic over international, present over future, minimal discussion of weird X-risk stuff) – at which point it begins to look pretty similar to the present panel of technocratic redistributive policies favored by the neoliberal / state-capacity-libertarian wonk class.

Chris Brandow:

I think the basic premise of EA is wonderful, but it's the "weirder" stuff that's a problem. Every time I hear people "in the EA space" talk about the kinds of problems they are worried about, it very often goes off into esoteric topics such as AI, balancing future concerns, and alleviating the suffering of animals in the wild. And I just shake my head at that point.

David R.:

The discount rate is not meant to say “these people are less valuable”, but rather to say “how the fuck are we to know anything about our descendants 100 million years from now.”

A bit of epistemological humility is a good thing, not that I’d expect the EA crowd to admit that for even a moment.

John from FL:

I think many people need to internalize something my father once told me, when I was a know-it-all college student: "Just because you are smart, doesn't mean everyone else is dumb."

The level of hubris displayed by people who KNOW all the solutions to any perceived problem, without fully grappling with decisions and values worked out over thousands of years, is fully evident in the EA movement. They bring an interesting framework to an important question, but their smugness and air of superiority are off-putting.

City Of Trees:

And this ties back to my hobbyhorse of that smugness being right there in the name, declaring their subjective set of pet causes to be objectively "effective".

Paul:

The pro-life movement has been incredibly effective. Perhaps it should get some EA donations! Humor aside, effectiveness assumes a valuation, which depends on a set of values; you can only have EA conditional on a shared valuation. To staunch conservatives, abortion is a horrific assault on human rights/life, and therefore prioritizing pro-life causes is the most effective allocation of resources.

Kenny Easwaran:

Even if one grants that abortion is a horrific assault on human rights/life, it's not at all clear that prioritizing pro-life causes is the most effective allocation of resources – just as someone who thinks that fentanyl is a deadly and destructive drug doesn't have to think that prioritizing prohibition is the most effective allocation of resources.

Many of the latter people think that the most effective allocation of resources would be *legalizing* opiates and using the funds that had been spent on enforcement on free treatment programs to get people off. Similarly, people who think every abortion is a tragedy might focus on getting people access to birth control to cut the number of abortions that occur.

The pro-life movement has been effective at electing politicians and possibly even getting bans passed. It's not clear that they've been as effective at reducing the number of abortions that happen.

Paul:

I think the actual pro-life movement is organized more like what you have in mind. There is a political element, but it has spent most of the last 20 years on changing minds and restricting access. There is also a large effort put into pro-life women's clinics, often placed near Planned Parenthood clinics, and there are maternity homes run by pro-life groups. I've read article after article of pro-choice folks complaining about how effective pro-life folks have been at restricting abortions.

Kenny Easwaran:

Effective at restricting is not obviously the same thing as effective at reducing the number. Alcohol prohibition and marijuana prohibition were far more effective at making alcohol or marijuana hard to get than at cutting down the number of people consuming them.

KetamineCal:

As I said above, many of us have been the overenthusiastic bright young person at work who gets ignored. Even when things play out exactly as predicted and the solution is exactly as described, nobody likes or trusts a know-it-all, and some prefer the misery they've already adapted to over risking one they haven't.

Jacobo:

I've been that person, and how I felt was that it wasn't my brightness that let me understand what was going on; it was just that I happened to be closest to the problem and was dealing with it every day. When I laid out my thoughts, I found people were reluctant even to ask follow-up questions; they just wanted to pass down their own judgment after thinking about it for half a second. Not sure who the EAs are in that scenario.

David R.:

Ehh, they're close enough to "completely wrong" that I find their inability to win an argument with an empty room without talking down to it to be comforting.

Kenny Easwaran:

They are very aware of this. One of the central philosophical problems they are interested in is the problem they call “cluelessness”.

But on the main point, you should read Frank Ramsey's classic paper on the mathematical theory of saving. He notes that everyone tries to apply a discount rate to make the unbounded weight of the future disappear, but that this is morally unjustified. Everyone here is aware of uncertainty, but they put it in the probabilities, not the discount rate.

Peter S:

But uncertainties compound over time... like a discount rate

Kenny Easwaran:

There are definitely many ways in which discount rates can be used to simulate some types of uncertainty. But if you're going to do that, then what is gained by doing the simulation, rather than just plugging in the uncertainties?

David R.:

IMO, there are no meaningful "probabilities" for a time horizon past, say, half a century. Not below the level of geological and astrophysical processes, anyway.

Speculating regarding human behavior and the "state of the world" is mostly blind-ass guesswork within that timeframe and entirely blind-ass guesswork outside of it.

Better to say "this uncertainty should be expressed as a discount rate to avoid overconfidence" than to try to arbitrarily stab at assigning probabilities, which is just overconfidence expressed differently.

Expand full comment
Kenny Easwaran's avatar

How does expressing it as a discount rate avoid overconfidence? If it’s about uncertainty, express it as uncertainty. When you express it as a discount rate you are using a tool for a purpose other than what it is naturally suited for. A discount rate of 2% is implicitly building in a 2% chance per year of some event that cancels out all utility differences between outcomes (eg, existential risk). But if you think that using a probability to do this is just a way to be overconfident, why isn’t the discount rate equally building in overconfidence?
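For what it's worth, the near-equivalence described here is easy to check numerically. A minimal sketch (my own illustration with made-up numbers, not anything from the thread):

```python
# Sketch: a constant discount rate r weights utility t years out by
# 1/(1+r)**t, which closely tracks the survival probability (1-p)**t
# under a p-per-year chance of an event that cancels all utility differences.

def discount_weight(r: float, t: int) -> float:
    """Weight on year-t utility under a constant discount rate r."""
    return 1.0 / (1.0 + r) ** t

def survival_weight(p: float, t: int) -> float:
    """Chance that no utility-cancelling event has occurred by year t."""
    return (1.0 - p) ** t

for t in (1, 10, 50, 100):
    print(t, round(discount_weight(0.02, t), 4), round(survival_weight(0.02, t), 4))
# At t=50: 0.3715 (discounting) vs 0.3642 (survival); at t=100: 0.138 vs 0.1326.
# The two weighting schemes are nearly identical.
```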

Expand full comment
David R.'s avatar

Me attempting to assign probabilities to, say, 50 potential causal factors over a 50 year timespan requires me to exert my (fundamentally flawed) human judgment 2500 times. I am wildly more likely to reach an incorrect result as a result of systemic biases in that case than if I were to simply say "no idea, let's call the discount rate due to uncertainty 5%".

Even though the latter is quite literally pulled out of a hat based on a gut feeling.

All the compounding uncertainties are random, whereas my attempts to guess them are virtually certain not to be. It is better, in such circumstances, to make fewer judgment calls.

Within a year or two, yes, it would probably be better for a reasonably well-informed person to evaluate some probabilities they understand well rather than just pulling something out of their ass. Outside that timeframe and outside of things we understand... not so much, IMO.

Expand full comment
Kenny Easwaran's avatar

I think this is all good motivation to do a robustness analysis on your estimates of probabilities, and perhaps a simplistic thing like 2% probability of extinction per year (which is what a 2% discount rate simulates) is useful to have as one of those estimates.

I don't understand how that is an argument that you shouldn't even consider probabilities, and should only use discount rates.

Expand full comment
David R.'s avatar

Every time I've seen the phrase employed, "robustness analysis" boils down to "let me assign probabilities to describe the possibility that my assigned probabilities are incorrect."

It's all turtles from there down. :-)

Thus, I subscribe to the "passive investor" theory of predicting the future beyond maybe a few decades: don't try. To me, a 2.5% discount rate simulates the reality that I have no real idea what the world will look like in a century and shouldn't attempt to make plans or judgment calls on the basis that I do. It's not saying "we won't be here"; it's saying "I have no possible way of knowing what will enhance people's well-being at the end of that time."

About the only reliable way I have of enhancing people's well-being in a century's time is to vote to solve the problems which exist in plain sight now. I don't need weird, overly ideological moral frameworks for that, especially when they're explicitly designed to nudge me in the direction of accepting vast accumulations of wealth in the hands of people who vaguely promise they'll give it away someday soon and *definitely* not hand it to their kids when they die.

Expand full comment
Marc Robbins's avatar

>>Even though the latter is quite literally pulled out of a hat based on a gut feeling.<<

I miss the old New Yorker "block that metaphor" jokes.

:-)

Expand full comment
David R.'s avatar

Touché.

Expand full comment
Allan Thoen's avatar

A lot of confusion seems to come up because, the way people discuss the discount rate, it is being made to do double duty -- to discount for the value we assign remote people we don't know relative to those close to us that we do know, and for the uncertainty of predicting the future.

You're right that those are two separate concepts and should be kept distinct, whatever terminology is used, even if the end product is called a single unified discount rate that includes both.

Expand full comment
User's avatar
Comment deleted
May 23, 2022
Expand full comment
David R.'s avatar

Ok.

I care whether they provide covering fire for the people fighting to allow the unlimited intergenerational accumulation of wealth by burning democracy to the ground.

You can devote as much or little to private charity as you care to and persuade others to do likewise, but I will be damned if I accept the argument that people should be attempting to earn as much money as possible so they can give it away to the needy.

Because that "earning" almost without exception comes to take the form of rampant rent-seeking and lobbying that harms the ordinary people they live next door to. And that's ok, because those working-class folk are rich by global standards, and the earners have made the vague promise that they'll give the proceeds to poorer children in nations on the other side of the globe.

It, like basically every other philosophical movement to come from Silicon Valley, is a mirage aimed at self-justification, not a genuine approach to the job of building an equitable society for all.

Expand full comment
JCW's avatar

This post reconfirms my sense that EA is, to a first approximation, a faith community with all of the virtues and vices that being a religion entails. They have the capacity to do a lot of good charitable work. They also have a lot of hand-wavy self-justification for why wealthy, fortunate, and privileged people should feel great about themselves. And they have a really compelling set of stories about how someday the world ends because the AI serpent is unleashed into the garden and brings the apocalypse.

Expand full comment
Johnson's avatar

You're kind of right that EA's attitude to wealth mirrors religion's, but for the exact opposite of the reasons you give. EA *does* have similarities to Christianity, but one key one is that *both* Christians and EAs tend to think that the vast majority of rich people are bad, and that if they were actually moral, they would give all of their money away.

I do agree that socially EA stuff plays the same role as religion in many people's lives, and in some respects EA theorists reinvented arguments that many Christian missionaries focused on foreign aid use. The evangelical megachurch I grew up in tried to be maximally efficient in its local operations so that it could give as much money as possible to its efforts in Africa.

Expand full comment
tardigrade24's avatar

I wonder if various schisms will emerge in the next few decades, or if EA will have petered out before then. EA extremism would also make the 2040s very interesting.

Expand full comment
User's avatar
Comment removed
May 23, 2022
Expand full comment
JCW's avatar

lol...the fact that you have this reaction (and the other one, below) basically tells you everything you need to know about the strength of my analysis.

Expand full comment
Casey's avatar

I have a hard time squaring a 0 future social discount rate with a justification for applied consequentialism that rests on abandoning the artificial certainty of trolley-problem-style scenarios.

I think there exists some future social discount rate that can be derived in part from the fact that things get more uncertain the further into the future you go. We should weight that which is more certain over that which is less certain, and on a sufficiently far out future horizon, that means we're weighting present or nearer-future concerns over more far future concerns.

I could have just talked myself into a circle here, since the 0 discount rate folks could just say that all I've done is describe using probabilities to drive decisions. So maybe it's just my being-alive-now squeamishness expressing itself.

Expand full comment
MagellanNH's avatar

My sense is that Discounted Cash Flow (DCF) analysis isn't very useful on problems where the future uncertainty (e.g., the time horizon or discount rate) is very, very large. The estimation errors compound to the point of absurdity.

Using DCF for this class of problems is sort of like trying to set the gap on a spark plug with a ruler. Even though rulers can be great tools, in this case it just isn't that useful for the job at hand.

I'm not sure we have a good tool, but that's not a reason to use a tool that only offers garbage answers.
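To put a rough number on that compounding (a hedged sketch of my own, not MagellanNH's math): at a 100-year horizon, a single percentage point of discount-rate error changes the present value by a factor of more than 2.5.

```python
# Sketch: present value of $1M received 100 years from now,
# under a 2% vs. 3% discount rate.

def present_value(cashflow: float, rate: float, years: int) -> float:
    return cashflow / (1.0 + rate) ** years

pv_low = present_value(1_000_000, 0.02, 100)   # ~ $138,000
pv_high = present_value(1_000_000, 0.03, 100)  # ~ $52,000
print(pv_low / pv_high)  # ~2.65x difference from one point of rate error
```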

Expand full comment
Paul's avatar

At best, discount rates should be treated as a local approximation for time substitution. Anything beyond that is confusing theory with the real world. If you knew with certainty the world was ending tomorrow, you'd spend all your resources today. In that framework you'd spend no effort on "saving the future." At a 99% chance of catastrophe, the same thing happens: you'd spend nearly everything today, which corresponds to a very high discount rate. Basically, you'll measure high discount rates in high-uncertainty environments. Survival and the discount rate are endogenous in these sorts of problems, so you need a public-goods framework; these issues should probably be modeled as coordination games, not discount rates.

Expand full comment
N. N.'s avatar

I think this is totally right, and it is why the usual line is "0 rate of *pure* time discount."

Expand full comment
Allan Thoen's avatar

In theory, shouldn't the discount rate for actions that might benefit future people (members of the "community of the future") be the same as the discount rate that members of one community should apply to actions they might take right now, in the present, for the benefit of a different community and to their own detriment, except with an additional adjustment for the added uncertainty of predicting the future?

It seems like getting a bit ahead of ourselves to argue about future discounts when we can't even agree about cross-community discount rates in the present.

Expand full comment
Kenny Easwaran's avatar

The standard strict utilitarian answer is that you don’t discount other groups and you don’t discount the future.

Expand full comment
Eli's avatar

But I think you're right – surely the future discount rate inevitably depends on whether humanity in the year 3000 consists of 10 billion people or 100 billion people? It seems hard to justify not altering how you weight the interests of future people along with how many of them you expect there to be.

Expand full comment
Kenny Easwaran's avatar

The idea is that, in fact, you do not weight the interests of future people depending on how many there will be. Each one counts as a full person with the same significance as you and me, regardless of when they live or how many of them there are. That means if there will be 10 billion people, then trade-offs between the present and future are pretty balanced, but if there will be 300 billion people, then we ought to be working a lot harder and even making sacrifices to make the future better.

Expand full comment
Marc Robbins's avatar

I'm imagining myself as a fairly rich person in 1922 thinking about the grinding poverty that the vast majority of people in East Asia are living in, and how I could use what wealth I have, as an EA person, to best ensure that their lives are at least somewhat better in 2022.

I guess the answer would have been to use my money to make sure this 18 year old Deng Xiaoping fellow has enough resources to survive.

In other words, predictions are hard, especially about the future, and long-term altruism, while very nice, may be infinitesimally important compared to other things that would be good to happen, like widespread economic growth.

Expand full comment
David R.'s avatar

Whereas, in reality, vast sums were spent on missionary activity to "Save China" and somewhat smaller ones on lobbying in DC to get the federal government to "save China".

The latter was somewhat useful in that it provided the impetus for the US to stand up to Japan before it overran East Asia. The former... lol.

I'm quite confident, verging on 100% confident, that 95% of the crap the "long-termists" want us to worry about is pointless, if not outright harmful.

Expand full comment
User's avatar
Comment deleted
May 24, 2022
Expand full comment
David R.'s avatar

I am aware of the history of the Qing Dynasty in the 19th century, thank you. I am curious where you saw the 200,000 figure for missionary donations; it's not one I've heard before.

Nonetheless, you misunderstand my point; I am saying that long-term plans of the "effective altruists" of the day turned out to be so much smoke and mirrors. Not only did they focus on evangelization/saving souls over genuine efforts to help the Chinese people, but they fundamentally misunderstood almost everything about the shape of the next century, backed the wrong horses at every turn, and made virtually every decision wrong.

I have no confidence, no offense meant, that you're doing any better in deciding what to give to and what is and isn't important.

It's far, far more important, to my mind, that we structure the rules of market economies so that they offer participants the greatest chance to fulfill their potential, including by making expensive investments and engaging in speculative R&D; that we curtail the rent-seeking and predatory practices shot through our whole economy; and that we generally tilt the playing field toward labor over capital. Even though labor won't donate extra income to EA-approved charities.

When I see a significant number of EA'ers trotting out the libertarian "I can spend my money better than the government can," I know beyond a doubt that for those people it is just another excuse not to pay taxes or submit to "burdensome regulations."

Expand full comment
Owen's avatar

I first encountered SBF through the Odd Lots interview where, as Matt Levine summarized it, he basically said “I’m in the Ponzi business and it’s going pretty well for me.” Crypto is unraveling in the public eye as being, *in its current manifestation*, mainly a huge bubble full of speculation and fraud. Maybe the huge injection of capital into the space ends up actually creating some products with real-world use cases for normal people, but right now it’s mostly self-dealing, hype, and hand-waving.

Having that associated with EA in any way is unbelievably toxic to EA. And if we take consequentialism seriously as Matt suggests here, that means it’s bad for crypto money to go directly to EA candidates.

I also want to note that the way SBF “tried” to elect Flynn was just unbelievably amateurish. “Parachute into a local race and shower unknown candidate with your shady millions” is just awful political strategy. It’s also super short-termist!

Plus, “how do I get my unpopular political/economic beliefs written into law” is not uncharted territory - you can just look at the success of the conservative legal and economic movements over the last 70 years or so. They used a long-term approach, building networks, taking over institutions from the bottom up, and now have captured seemingly most of the US court system and many of its legislators.

The main problem with EA is that it has a very small and rabid base and basically no support or even awareness outside of that. The playbook for taking political power when your base is that small is basically “do a coup”. Which I don’t think the EA folks want but maybe that’s their utilitarian calculation?

Expand full comment
Johnson's avatar

Breaking into elected politics is clearly going to be really hard for EAs. My understanding is that up to this year, it's had more of an emphasis on getting people into important bureaucratic roles, which is easier.

Expand full comment
tardigrade24's avatar

EA's efforts, as far as I can tell, only distribute massive amounts of money in one way or another: to charities, "AI research centers," or to Flynn's political campaign. There's no mechanism for creating effective institutions from scratch, and until EA comes up with something like that, the organization will be limited in what it can do. (This problem probably rules out coup-plotting as well.)

Expand full comment
Allan Thoen's avatar

"We are not 100 percent bought in on the full-tilt EA idea that you should ignore questions of community ties and interpersonal obligation"

I'm glad you're not bought in on this, because this idea is the feet of clay of the entire EA edifice. From a public policy perspective, none of the EA ideas amount to a hill of beans unless they're embedded in the legal system of a specific community and society. Which means they have to, first and foremost, serve the interests of the members of that community and have local legitimacy. It's fine to talk about how noble it would be if we all gave away our possessions to those more needy. But the meta-consequentialist approach to fads like EA has to always remember that the real-world track record of trying to put consequentialist ideas like that into practice, in their pure form, has tended to lead to some pretty bad, totalitarian, Animal Farm-like outcomes.

Expand full comment
David Abbott's avatar

Winning elections is far from the only way EA ideas can gain purchase. Private charity is an easier path. If a handful of rich EAs do great things and one of them becomes Person of the Year or wins the Nobel Prize, that could spur other private actors to effective giving. Enough wealth is privately held that getting a significant number of very rich people to be effective altruists could make a big difference.

Expand full comment
User's avatar
Comment deleted
May 23, 2022
Expand full comment
City Of Trees's avatar

Another fun argument I'd get popcorn out for is whether it's better to create charitable trusts obligated to causes deemed righteous, or whether it's better to just liquidate it and give it all away immediately.

Expand full comment
CarbonWaster's avatar

Yes, it's important to note that what might be easy-ish in one context (charitable giving) might not be in another (democratic party politics). The key to this is that politics is the exercise of *power*, and elections are the way we decide who gets to handle that power and in what proportions. One thing people can do with power is use it to reward themselves and their friends; noting that this is short-sighted or suboptimal from a consequentialist perspective does nothing to make it less appealing. The challenge for EA types is that they need to find ways to show a plurality of voters that helping others should be a *priority*, not just a nice-to-have, and I am very skeptical of the prospects of that.

Expand full comment
Neva C Durand's avatar

I thought crypto was terrible for climate change. Doesn’t it kind of defeat the purpose of longtermism when the thing you do to make your money is actively harmful?

Expand full comment
Michael's avatar

They have a semi-decent answer to this: the cryptocurrencies Bankman-Fried is the biggest promoter of don’t really use inordinate electricity (a different system than Bitcoin’s).

And while the exchange does handle Bitcoin, the only carbon impact there is from moving money in and out, so it’s not as bad as might be expected. And they buy non-fake carbon offsets.

Expand full comment
Aaron Erickson's avatar

Yeah, my guess is that someone like him would be behind the efforts for ETH to move to a proof-of-stake system, which greatly reduces the need to mine.

Expand full comment
KetamineCal's avatar

Would not be shocked if someone's already working on a cryptocurrency that somehow incentivizes pro-environmental measures. I have zero crypto and am entirely turned off by the scene but it does seem to overcome issues with early adoption.

Expand full comment
City Of Trees's avatar

Seems to me that just investing in clean energy ventures gets directly to this goal without crypto being a middleman.

Expand full comment
KetamineCal's avatar

Fully agree. I still don't know what crypto will ultimately be used for. But I think it has the ability to function like Kickstarter. But that's hardly visionary.

Expand full comment
Kenny Easwaran's avatar

I think the more interesting thing is that it can function like Kickstarter, with additional features like quadratic funding: https://vitalik.ca/general/2019/12/07/quadratic.html

That is, if there's a bunch of people who all want to support a similar project, but have some disagreements about the details, they can chip in various amounts of money and let the details be sorted out by voting on the basis of the square root of the amount each person contributed. The reason for using the square root is that, this way, the marginal vote a person is willing to cast puts them at a level of influence proportionate to the amount they are willing to spend on a vote.

This may not be the best particular implementation or structure, but enabling self-executing voting systems to be built into contracts seems interesting.
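A minimal sketch of the square-root weighting described above, with made-up contribution amounts; the matching formula is the one from the linked post:

```python
import math

# Made-up contributions to a shared project.
contributions = {"alice": 100.0, "bob": 25.0, "carol": 1.0}

# Voting weight is the square root of the contribution:
# 100x the money buys only 10x the influence.
weights = {name: math.sqrt(amt) for name, amt in contributions.items()}
print(weights)  # {'alice': 10.0, 'bob': 5.0, 'carol': 1.0}

# Quadratic funding (per the linked post): the project's funding level is
# the square of the summed square roots, so broad support beats deep pockets.
funding = sum(math.sqrt(a) for a in contributions.values()) ** 2
print(funding)  # (10 + 5 + 1)**2 = 256.0, vs. 126.0 contributed directly
```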

Expand full comment
Alex S's avatar

People invent concepts like Gridcoin here, but there’s no mechanism that makes sense: they invent a currency and prove they won’t create too much of it, then give it to the people doing good, but that doesn’t make it worth anything.

It also can be discouraging when you can’t earn any of it as an individual because someone is running BOINC on the work computers. Better to make clean energy so cheap it makes your power bill go down.

Expand full comment
KetamineCal's avatar

Yeah, it still seems like a solution in search of a problem thus far. Might just end up functioning like Uber, breaking some monopolies that the political process is poor at breaking.

I was interested in HNT for a bit because it actually serves a purpose but that just got pounded like everything else.

Expand full comment
Marc Robbins's avatar

However, by making all cryptocurrency more popular, I suspect FTX is contributing to the negative environmental effects via knock-on effects, even if it might not itself directly promote the more damaging currencies like Bitcoin.

Expand full comment
Binya's avatar

Not if the benefits of how you use that money outweigh the costs. That's very explicitly stated in Matt's article, he cites murdering people in order to harvest their organs.

Expand full comment
smilerz's avatar

Proof-of-work crypto has a CO2 impact roughly on par with Sri Lanka's and considerably smaller than plain old banking's. Much (most?) of current development is in making proof-of-stake as robust as PoW.

Expand full comment
Lance Hunter's avatar

My biggest gripe with EA is that they often assume they have significantly more knowledge of the consequences of their actions than they could actually have. It would be like if someone in a trolley-problem scenario flipped the switch to kill just one person instead of five, and only later realized that there were actually three people on the new track (and only one on the original track).

Earn-to-give also has its problems. First off, there are some professions that will just do more harm than good, no matter how much money the people doing them donate to charity. If El Chapo or Mohammed bin Salman went full-EA and donated the billions they earn to charity, that wouldn’t necessarily mean that their overall effect on the world becomes positive. Second, it’s a bad case of simplistic thinking to just assume that more money going into charities will automatically translate directly into more good being done. If earn-to-give were a categorical imperative and expected of everyone, then who would actually staff the charities that are supposed to turn dollars into things that benefit the world? The most effective workers would always choose the work with the highest pay, work that might do far more harm than the good that less-effective workers can accomplish with the donated funds.

Expand full comment
Kenny Easwaran's avatar

80,000 Hours is the EA career advice site. They discuss these issues. They stress that you should find your own comparative advantage, which at least in part involves finding out what needs the community has. Some people can help more by becoming doctors, some help more by becoming academic researchers, and some help more by earning to give (but not by strictly maximizing their earnings, if those earnings come at a cost).

Expand full comment
THPacis's avatar

That sounds nice… really like the age-old arch-conservative mantra of “everyone has their place… some were born to be lords, some servants, but all have their place and importance…” Seriously, who exactly reading the EA stuff would understand it to mean that s/he specifically is not of the select few for whom, it just so happens, it is socially optimal, and morally imperative, to become super rich?

Expand full comment
Kenny Easwaran's avatar

If you live in a market society, you’re already in the situation where you will get rich if you have the right sort of comparative advantage and won’t if you don’t. The EA idea is just that there might be ways to make use of that.

Expand full comment
THPacis's avatar

That's an oversimplification. It's also not necessarily true. In a social democratic welfare state – not a Utopia, but a reality in much of Europe – the wealth gap is much smaller, and it is the state, via transparent and democratic mechanisms, that redistributes wealth for good causes, rather than the pet fads of unelected, unaccountable individuals. Wouldn't that be a better system? At the end of the day, EA sounds like a newly-packaged way to sell laissez-faire capitalism and libertarian ideology (with some prosperity gospel undertones) to a 21st-century crowd (esp. one that is highly educated in quantitative fields but with little knowledge of history or comparative politics).

Expand full comment
Kenny Easwaran's avatar

Yes, that is likely a better system. There are plenty of European Effective Altruists who earn to give under that system as well (though probably the balance is slightly higher for people to work on directly helpful projects).

I do think it's a fair criticism of Effective Altruism that it tends to take this political-economic background as given rather than acting to change it, but I think the main claim is that changes to such a system have such wide-ranging consequences that it is in fact very hard to figure out whether they are net improvements. But by the same token, they definitely *don't* try to get European countries to move to a more Anglo-American laissez-faire system, the way your caricature would have it.

Expand full comment
Tran Hung Dao's avatar

> Earn-to-give also has its problems. First off, there are some professions that will just do more harm than good, no matter how much money the people doing them donate to charity.

Writing this makes it sound like you know absolutely nothing about EA and are just making up strawmen to argue against on the Internet.

Both *Doing Good Better* (William MacAskill) and *80,000 Hours* (Benjamin Todd) make it clear that they're talking about net impact, not whatever bizarre non-net accounting you're claiming they use.

> If earn-to-give were a categorical imperative and expected of everyone, then who would actually staff these charities that are supposed to turn dollars into things that benefit the world?

Again, that's not remotely what they write.

Also, I can't believe you just claimed that a bunch of consequentialists are going around decreeing categorical imperatives...

Expand full comment
Dan H's avatar

If consequentialism were deontology it would be wrong, therefore deontology is right. Get it?

Expand full comment
Nicholas Decker's avatar

Mohammed bin Salman has done nothing of value. He is a good example. However, El Chapo has, in fact, done a ton of good for the world! He has provided what people want. Being a drug kingpin is a good thing, for otherwise we’d have no, or at least much less, crack.

Expand full comment
Kenny Easwaran's avatar

This depends on an assumption that market demand for crack is good evidence for value created by access to crack. There’s good reason to think that for things like addictive substances and misinformation, market demand is a bad proxy for value created.

Expand full comment
Nicholas Decker's avatar

Addiction is simply a term for wants we disapprove of. Consider this - most people cannot bear to be without their family. If we were to be separated from our family, we would go through a set of severe physical and psychological symptoms - we call this “grief”. In the same vein, many people feel they cannot go without certain drugs, and if they can’t get it, they have severe physical and psychological symptoms. Of course, we call these “withdrawal symptoms” and the very want for the drug an addiction.

Perhaps you could say that, if you had never used the drug, you would never want it. Of course, if you never meet your spouse, you’d never grow attached either.

Seeing addiction as a disapproved want clarifies a lot of things. It explains how one can be addicted to television, sex, video games, gambling, alcohol, and all manner of other drugs and disapproved behaviors, but never addicted to family, friends, pets, or to being a pleasant and well-liked person.

Expand full comment
Kenny Easwaran's avatar

Addiction is a term for wants *the person involved* disapproves of, not just wants that others disapprove of. And usually, the term is used not just when the person disapproves of their own want, but specifically when that want ends up causing them to do badly at achieving other wants they judge to be more important. That’s what gives the concept of “addiction” its moral significance of enabling us to discount these wants: the person who has them has reasonably judged other wants to be more important, and yet this want is getting in the way of those more important ones.

Expand full comment
Tokyo Sex Whale's avatar

That characterization is the best reason to understand addiction as a disease or impairment, one that prevents a person from functioning at their full potential, rather than as something that can simply be treated or cured. That's not to say that "treatment" can't help.

Expand full comment
Alex S's avatar

This seems like a limit to rationality. Of course utilitarianism says it’s good to give drugs to drug users - they’re utility monsters!

They don’t rationally want more drugs; what you’re giving them is the physical machinery of “wanting” itself, so it turns them into someone who wants it.

Expand full comment
David Abbott's avatar

I’ve never found the trolly problem difficult. Anyone who values his own “integrity” more than four lives is a sanctimonious asshole. I’m a consequentialist but, like most people, I’m somewhat selfish.

I could stop drinking wine, take my kid out of private school, double my work effort, and spend the proceeds of these changes on mosquito nets for poor strangers in Africa. I simply don’t care enough about other people’s happiness to do that, nor has my study of consequentialist ethics suggested why I should care about hedons in other people’s brains as much as my own. My brain is the center of my world!

I want to know more about the lifestyles of rich effective altruists. I can totally understand getting more pleasure out of buying mosquito nets than buying a yacht and a private jet, I might do the same if I had the means. But I would never give up interesting food or a comfortable dwelling to buy mosquito nets. How many people actually do that? Are effective altruists more or less ascetic than medieval Christians? Deep insights into human nature are available to those who collect the data.

Expand full comment
Aaron Erickson's avatar

I can tell you an answer that you probably already know... almost nobody in EA likely lives an overly spartan life devoid of hedonic pleasures. That said, I also don't know that they go out and demand others live in any sort of minimalist way to be considered "pure." If they did, their entire movement would collapse in a cloud of hypocrisy.

Expand full comment
Kenny Easwaran's avatar

An important thing for consequentialists is that hypocrisy is generally not a harm in itself. Consequentialists going back to Peter Singer all say “I should be doing more than I actually am,” but the point is that it’s better to do more good. Anyone who focuses on the hypocrisy will just cut back on their ambitions to do good, so that their ambitions and their actions match. But that gets things precisely backwards.

Expand full comment
A.D.'s avatar

I'm reminded of "A Thousand Splendid Suns," where one character wants to help but doesn't want attention, and ends up doing almost nothing for a girl who could really use the help, while kind of looking down on another character who donates ostentatiously but actually makes the girl's life much better.

Expand full comment
Johnson's avatar

This is another of EA's similarities to Christian theology--everyone is actually bad, most importantly yourself, and the most you can realistically hope for is to minimize your badness!

Expand full comment
Kenny Easwaran's avatar

I think it's unfortunate that so much of moral philosophy has focused on the deontic concepts "right" and "wrong", which in a consequentialist framework end up most naturally being translated as "the single best thing possible" and "literally anything else". I think this particular interpretation shouldn't be any more natural than the opposite one, where literally the only wrong thing is the single worst thing possible, and everything else is right.

If we don't use the absolute words "good" and "bad" and only use the words "better" and "worse", it avoids this consequence. None of us is doing the best we can, and we can always do better, but we're also all doing better than some other things we could do, even literally Hitler.

Expand full comment
David Abbott's avatar

Like^^

Expand full comment
Dan H's avatar

I don't think the point is necessarily that you should deny yourself all comforts and pleasures so that you can give all money in excess of what you need to feed yourself to charity. It's just that, on the margin, most well-off people in the developed world can give away a substantial amount of money without affecting their quality of life at all. And that money can alleviate a tremendous amount of suffering in the developing world. To put it another way, imagine you are living a comfortable life and then you get a 10% raise at work. You can give that money away and help a lot of people, or you can use it to buy some stuff you were perfectly happy without already. You may get some pleasure from the extra consumption on the margin, and if you literally care nothing for other people then I guess that is rational. But I think most people aren't like that. They actually do care about the well-being of other people a non-zero amount.

Expand full comment
Marcus Seldon's avatar

One problem I've long had with consequentialism is that, in practice, it's pretty easy for a smart person to make a plausible consequentialist argument for a wide variety of actions. This makes consequentialism slippery and hard to argue against, and also opens up room for motivated reasoning.

The focus on AI risk is a good example. A bunch of nerdy people who are really interested in technology and analytic philosophy get together and form a consequentialist charitable movement. At first, they decide that donating large sums of money to global public health charities is best. But that's a boring basis for a tribe, even if a noble one, and means its members are at best just funding the good works of others. But then the EA movement collectively decides that the most important issue facing humanity is actually AI risk, which means that nerdy programmers who are into analytic philosophy not only can but are morally obligated to save humanity by thinking about philosophy of mind, rationality, and computer science. That seems like a much more fun basis for a community, but it seems fishy that that is where things landed, doesn't it? I'm not saying anyone is acting in bad faith, just that we're all flawed humans who are prone to motivated reasoning.

(Don't get me wrong, I'm partly persuaded by arguments for AI risk myself, but I am suspicious of how much it is elevated relative to other issues in EA circles)

Expand full comment
Aaron Erickson's avatar

It is a sad but predictable part of our political climate that someone like SBF is seen as self-serving. It reminds me of how YIMBYs get accused of being shills for "evil developers" by NIMBYs, when 99% of the people in the movement literally just want more housing, and probably over half of them already have secure housing and therefore no real personal stake.

I don't know SBF, but I know a few people who were crypto enthusiasts early who did *very* well. A few did the predictable things and became douchebag showoffs, but most quietly sold off their crypto holdings and now largely do philanthropy, but through organizations that make effective use of the money.

Seems like a thing that often, but obviously not always, happens with new money.

Expand full comment
Matt C's avatar

Some number of EA believers should take their talents to local governments, particularly midsize to large urban governments, which often have the ability to execute meaningful solutions. Seems like EA lives in the philanthropic world, which can be OK, but they are missing out on change opportunities in local (not state or federal) government.

Expand full comment
Kenny Easwaran's avatar

That’s what Carrick Flynn was trying to do! There may be some more effective ones that haven’t been as open about it.

Expand full comment
N. N.'s avatar

He was running for state-level office, not local office. Maybe it is what he should have done (I'm pretty sympathetic to that view), but as I understand it, it is not what he actually did.

Expand full comment
Kenny Easwaran's avatar

I misread! I just saw the discussion of government. I do think that local governance, including things like water boards, utility boards, and school boards, not just county supervisors and city councils, has a lot of important power that could be used for good, rather than just trying to start at the level of the federal Congress.

Expand full comment
KetamineCal's avatar

I'd imagine that a lot of wealthy EAs don't live in places where holding local office would do the most good. But state legislature could be just right.

Expand full comment
Mark's avatar

EA is good intellectual fodder. And I agree that we need more coordination at the federal level to deal with issues like future pandemics, climate change, and AI. But using EA as a philosophical foundation for governance is not going to generate positive electoral outcomes. It is ungrounded in both American history and human nature. The only circumstance under which I could see it having a decent shot is if the electorate were composed of MENSA members and Less Wrong contributors.

Expand full comment
John's avatar

The intellectual fodder is pretty key though. I don't think the Democratic Party should officially align behind an EA-centred platform. But people who want to steer the country should be obliged to intensively consider the trade-offs Matt discusses here in his last paragraphs between policy & voting as 'righteousness', and their dry, predictable impact on real things happening in the world, and whether you like those impacts or not.

Expand full comment
lindamc's avatar

I like (and "liked") both Mark's and John's points here about intellectual fodder. I'm no expert, but I've been curious about EA/"rationalism" for a few years. It's so interesting to (try to) think this way, but I strongly agree about its incompatibility with US history and human nature. It's just weird, at least for me, to try to think about ways to contribute to good outcomes with no regard whatsoever for your own community. I don't think most people operate at this level of abstraction.

Your observation here about trade-offs is so important; I think that is a *huge* gap in governance (and citizen participation) right now at every level. Trade-offs are almost never discussed, at least openly. The popular perception of politics as a sport, and of every issue/election as some kind of game, with people unquestioningly aligning themselves with whatever view their side/team advocates, is also deeply unhelpful. This large-scale gross oversimplification, and the related assumption that there are simple and obvious answers to complex and longstanding problems, has failed society across so many dimensions, including but of course not limited to pandemic response.

Expand full comment
KetamineCal's avatar

I've never been able to quite square how local charity/economy gets outsized attention while local politics are virtually ignored.

Expand full comment
Mark's avatar

You’re onto something with this comment.

Expand full comment
City Of Trees's avatar

There was a whole Simpsons episode made on the failure of MENSA members to turn around society:

https://en.wikipedia.org/wiki/They_Saved_Lisa%27s_Brain

Expand full comment
Mark's avatar

Hah, I was thinking of this episode as I wrote that comment.

Expand full comment