
I think with the EA movement specifically, a lot of people have trouble squaring the source of the funds with their use – you can buy someone like Elon Musk or Bill Gates as a philanthropist, since they made money by making things, but many people (including myself) perceive crypto billionaires as having made their money off false promises to everyday people. If you see crypto as a Ponzi scheme, it becomes harder to believe the people in it are genuine altruists.

Obviously the classic argument exists that you can do some evil to make money and then do good of greater utility, but we know human beings are self-serving, and this is just a retread of the politicians who compromise everything to win power because 'without power we can't help anyone.' Well, those people get power and usually don't help anyone anyway. You can also argue that they genuinely think crypto is good, but, well, I'm not sure that's an argument for the effectiveness of their altruism.

I'm a believer in the principles of effective altruism – in fact, I've structured my career to do meaningful work on an important public health problem, and taken a large pay cut to do so – and I'm funded by one of the large foundations, so I see the role rich funders play. But as often happens in the modern age, being effectively altruistic and being 'in the EA community' are not the same thing, and these subcultures form, become insular, and lose sight of their goals. Long-term utility is precisely one of those areas that allows human beings to subconsciously skew their thinking around their own selfish interests, and in fact the 'rationalist' community seems to be more about internal status-seeking around issues like AI risk than about thinking through real problems, as well as making the fundamental 'the world is just an engineering problem' error common to both crypto and online rationalism.

Frankly, this is the telling sentence: 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all.


Maybe it’s “not about crypto,” but if the biggest name in applied consequentialism right now is, essentially, in the business of marketing Ponzi schemes to the gullible as his day job, that seems kinda relevant to the question of how much weight I should assign to his opinions about existential risks?


I have really only come into contact with EA through Slow Boring, but, to be honest, they seem like a bunch of super creepy weirdos.

The no-community-ties thing is a nudge away from discouraging the formation of friendships or families.

The "earn to give" thing has strong "prosperity gospel" vibes, where all sorts of shenanigans are self justified.

And the AI thing is just sorta weird.


EA seems like a complete political non-starter. It combines the worst elements of neoliberalism (cold, reductive focus on efficient generation of dollars/hedons) and progressivism (esoteric and unpopular views held predominantly by people with a college degree).


The discount rate is not meant to say “these people are less valuable”, but rather to say “how the fuck are we to know anything about our descendants 100 million years from now.”

A bit of epistemological humility is a good thing, not that I’d expect the EA crowd to admit that for even a moment.


This post reconfirms my sense that EA is, to a first approximation, a faith community with all of the virtues and vices that being a religion entails. They have the capacity to do a lot of good charitable work. They also have a lot of hand-wavy self-justification for why wealthy, fortunate, and privileged people should feel great about themselves. And they have a really compelling set of stories about how someday the world ends because the AI serpent is unleashed into the garden and brings the apocalypse.


I have a hard time squaring a zero future social discount rate with a justification for applied consequentialism that rests on abandoning the artificial certainty of trolley-problem-style scenarios.

I think there exists some future social discount rate that can be derived in part from the fact that things get more uncertain the further into the future you go. We should weight that which is more certain over that which is less certain, and on a sufficiently far-out future horizon, that means weighting present or nearer-future concerns over far-future concerns.
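To make that concrete, here's a minimal sketch (the annual "model failure" probability is a made-up illustrative number, not an empirical estimate): if each year carries some independent chance that our predictions about the future stop applying, the expected weight on outcomes t years out decays geometrically, which is arithmetically the same as applying a positive discount rate.

```python
# Minimal sketch, assuming a hypothetical independent probability EPS each
# year that our model of the future breaks down. Under that assumption,
# uncertainty alone reproduces a positive discount rate r = EPS / (1 - EPS).

EPS = 0.01  # hypothetical annual chance the forecast becomes worthless

r = EPS / (1 - EPS)  # discount rate implied by that uncertainty

for t in [1, 10, 100, 1000]:
    weight_from_uncertainty = (1 - EPS) ** t  # chance the forecast still holds
    weight_from_discounting = 1 / (1 + r) ** t
    print(f"t={t:>4}: uncertainty weight={weight_from_uncertainty:.6f}, "
          f"discounted weight={weight_from_discounting:.6f}")

# The two columns are identical: even a 1% annual failure chance leaves
# almost no weight on outcomes 1,000 years out.
```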

I could have just talked myself into a circle here, since the 0 discount rate folks could just say that all I've done is describe using probabilities to drive decisions. So maybe it's just my being-alive-now squeamishness expressing itself.


I first encountered SBF through the Odd Lots interview, where, as Matt Levine summarized it, he basically said “I’m in the Ponzi business and it’s going pretty well for me.” Crypto is unraveling in the public eye as being *in its current manifestation* mainly a huge bubble full of speculation and fraud. Maybe the huge injection of capital into the space ends up actually creating some products with real-world use cases for normal people, but right now it’s mostly self-dealing, hype, and hand-waving.

Having that associated with EA in any way is unbelievably toxic to EA. And if we take consequentialism seriously, as Matt suggests here, that means it’s bad for crypto money to go directly to EA candidates.

I also want to note that the way SBF “tried” to elect Flynn was just unbelievably amateurish. “Parachute into a local race and shower unknown candidate with your shady millions” is just awful political strategy. It’s also super short-termist!

Plus, “how do I get my unpopular political/economic beliefs written into law” is not uncharted territory - you can just look at the success of the conservative legal and economic movements over the last 70 years or so. They used a long-term approach, building networks and taking over institutions from the bottom up, and now seem to have captured most of the US court system and many of its legislators.

The main problem with EA is that it has a very small and rabid base and basically no support or even awareness outside of that. The playbook for taking political power when your base is that small is basically “do a coup”. Which I don’t think the EA folks want but maybe that’s their utilitarian calculation?


"We are not 100 percent bought in on the full-tilt EA idea that you should ignore questions of community ties and interpersonal obligation"

I'm glad you're not bought in on this, because this idea is the feet of clay of the entire EA edifice. From a public policy perspective, none of the EA ideas amount to a hill of beans unless they're embedded in the legal system of a specific community and society. Which means they have to, first and foremost, serve the interests of the members of that community and have local legitimacy. It's fine to talk about how noble it would be if we all gave away our possessions to those more needy. But the meta-consequentialist approach to fads like EA has to always remember that the real-world track record of trying to put consequentialist ideas like that into practice, in their pure form, has tended to lead to some pretty bad, totalitarian, Animal Farm-like outcomes.


I thought crypto was terrible for climate change. Doesn’t it kind of defeat the purpose of longtermism when the thing you do to make your money is actively harmful?


My biggest gripe with EA is that its adherents often assume they have significantly more knowledge of the consequences of their actions than they could actually have. It would be like someone in a trolley-problem scenario who flipped the switch to kill just one person instead of five, and only later realized that there were actually three people on the new track (and only one on the original track).
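A minimal sketch of that gap, with all numbers invented to mirror the example above: the decision looks obviously right under the flipper's beliefs and turns out to be the worse choice given what was actually on the tracks.

```python
# Illustrative only: the counts are invented to match the comment's example.
# What the switch-flipper believed versus what was actually on each track.

believed = {"original": 5, "new": 1}  # the beliefs the decision was based on
actual = {"original": 1, "new": 3}    # what turned out to be true

def deaths(track_counts, flipped):
    # Flipping the switch diverts the trolley onto the new track.
    return track_counts["new"] if flipped else track_counts["original"]

print(deaths(believed, flipped=True))   # 1 -- flipping looks clearly right
print(deaths(believed, flipped=False))  # 5
print(deaths(actual, flipped=True))     # 3 -- but it actually killed more...
print(deaths(actual, flipped=False))    # 1 -- ...than doing nothing would have
```

The consequentialist arithmetic is only as good as its inputs, and real inputs are rarely as clean as the thought experiment assumes.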

Earn-to-give also has its problems. First off, there are some professions that will just do more harm than good, no matter how much money the people doing them donate to charity. If El Chapo or Mohammed bin Salman went full-EA and donated the billions they earn to charity, that wouldn’t necessarily mean their overall effect on the world becomes positive. Second, it’s a bad case of simplistic thinking to just assume that more money going into charities will automatically translate directly into more good being done. If earn-to-give were a categorical imperative and expected of everyone, then who would actually staff the charities that are supposed to turn dollars into things that benefit the world? The most effective workers would always choose the work with the highest pay, work that might do far more harm than the good that less effective workers could do with the donated funds.


I’ve never found the trolley problem difficult. Anyone who values his own “integrity” more than four lives is a sanctimonious asshole. I’m a consequentialist but, like most people, I’m somewhat selfish.

I could stop drinking wine, take my kid out of private school, double my work effort, and spend the proceeds of these changes on mosquito nets for poor strangers in Africa. I simply don’t care enough about other people’s happiness to do that, nor has my study of consequentialist ethics suggested why I should care about hedons in other people’s brains as much as my own. My brain is the center of my world!

I want to know more about the lifestyles of rich effective altruists. I can totally understand getting more pleasure out of buying mosquito nets than buying a yacht and a private jet; I might do the same if I had the means. But I would never give up interesting food or a comfortable dwelling to buy mosquito nets. How many people actually do that? Are effective altruists more or less ascetic than medieval Christians? Deep insights into human nature are available to those who collect the data.


One problem I've long had with consequentialism is that, in practice, it's pretty easy for a smart person to make a plausible consequentialist argument for a wide variety of actions. This makes consequentialism slippery and hard to argue against, and also opens up room for motivated reasoning.

The focus on AI risk is a good example. A bunch of nerdy people who are really interested in technology and analytic philosophy get together and form a consequentialist charitable movement. At first, they decide that donating large sums of money to global public health charities is best. But that's a boring basis for a tribe, even if a noble one, and means its members are at best just funding the good works of others. But then the EA movement collectively decides that the most important issue facing humanity is actually AI risk, which means that nerdy programmers who are into analytic philosophy not only can but are morally obligated to save humanity by thinking about philosophy of mind, rationality, and computer science. That seems like a much more fun basis for a community, but it seems fishy that that is where things landed, doesn't it? I'm not saying anyone is acting in bad faith, just that we're all flawed humans who are prone to motivated reasoning.

(Don't get me wrong, I'm partly persuaded by arguments for AI risk myself, but I am suspicious of how much it is elevated relative to other issues in EA circles)


It is a sad but predictable part of our political climate that someone like SBF is seen as self-serving. It reminds me of how YIMBYs get accused of being shills for "evil developers" by NIMBYs when 99% of the people in the movement literally just want more housing, and probably over half already have secure housing themselves and therefore no real personal stake.

I don't know SBF, but I know a few people who were crypto enthusiasts early who did *very* well. A few did the predictable things and became douchebag showoffs, but most quietly sold off their crypto holdings and now largely do philanthropy, through organizations that make effective use of the money.

Seems like a thing that often, but obviously not always, happens with new money.


Some number of EA believers should take their talents to local governments, particularly midsize to large urban governments, which often have the ability to execute meaningful solutions. Seems like EA lives in the philanthropic world, which can be OK, but they are missing out on change opportunities in local (not state or federal) government.


EA is good intellectual fodder. And I agree that we need more coordination at the federal level to deal with issues like future pandemics, climate change, and AI. But using EA as a philosophical foundation for governance is not going to generate positive electoral outcomes. It is ungrounded in both American history and human nature. The only circumstance under which I could see it having a decent shot is if the electorate were composed of MENSA members and Less Wrong contributors.
