I think with a lot of the EA movement specifically, a lot of people have trouble squaring the source of funds with their use – you can buy someone like Elon Musk or Bill Gates as a philanthropist, since they made their money by making things, but many people (including myself) perceive crypto billionaires as having made money off false promises to everyday people. If you see crypto as a Ponzi scheme, it becomes harder to believe the people in it are genuine altruists.
Obviously there's the classic argument that you can do some evil to make money to do good of greater utility, but we know human beings are self-serving, and this is just a retread of the politicians who compromise everything to win power because 'without power we can't help anyone.' Well, those people get power and usually don't help anyone anyway. You can also argue that they genuinely think crypto is good, but I'm not sure that's an argument for the effectiveness of their altruism.
I'm a believer in the principles of effective altruism – in fact, I've structured my career to do meaningful work on an important public health problem, and taken a large pay cut to do so – and I'm funded by one of the large foundations, so I see the role rich funders play. But as often happens in the modern age, being effectively altruistic and being 'in the EA community' are not the same thing, and these subcultures form, become insular, and lose sight of their goals. Long-term utility is precisely one of those areas that allows human beings to subconsciously skew their thinking around their own selfish interests, and in fact the 'rationalist' community seems to be more about internal status-seeking around issues like AI risk than about thinking through real problems, as well as making the fundamental 'the world is just an engineering problem' error common to both crypto and online rationalism.
Frankly, this is the telling sentence: 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all.
"Frankly, this is the telling sentence 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all."
I'm not sure I understand. He found a crypto trading arbitrage that generated millions of dollars, which he intended to give away. He went from that to founding FTX, which made him a billionaire and allowed him to give away vastly more money. Would it have been better if he had stayed at the Centre for Effective Altruism?
I'll concede here that this was a bit of a hyperbolic flourish to end the comment – in the end, in this particular case, it is better for the world that he is able to give vastly more money to (hopefully) effective causes.
But in general, I find this mythical certainty to be ethically problematic, like all those actors who came to LA just 'knowing' they'd make it when no one believed in them, and here they are at the Oscars. Well, no one's interviewing the waitresses and the busboys, are they? So most people who do what he did here are far more likely to spend years trying to get their business to work, or more likely still to fail outright, as most people do. A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan. What if it was just luck that it worked for him, and we haven't heard from the dozens of finance people who are telling us that it'll all be worth it when they're billionaires and save the world?
Broadly speaking, I think most consequentialists undervalue the uncertainties of the human mind, especially their own. So taking a decision like this, where you go from definitely working on something good in a small way to devoting yourself to making money exclusively with the intention of giving it away, undervalues your own uncertainty about both your success and your future decision making. If you took everyone who told themselves 'I'll just focus on being rich now, I'll do some good later', I'm not sure you'd wind up with much net good in the aggregate.
"A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan."
I don't know, I'm not sure I've seen this. In fact I have the opposite experience with EA folks, in that they tend to fetishize probabilistic thinking to an unhelpful degree sometimes. But more to the point, I seriously doubt that SBF was certain it would work out and he would become a billionaire. In fact, in interviews I've heard with him he talks about giving FTX a ~10% chance of working out (i.e. not failing outright). But even in that case it is still a positive expected value play. If it fails, he goes back to Alameda or some other prop trading firm and continues giving a few hundred thousand a year to charity. But if it works, it could be huge and he could be giving away billions (which is what happened). Even if the latter is unlikely, it is still worth doing because the payoff is huge.
More generally, I think the way you characterize the amount of certainty in the EA community is not accurate. My experience reading EA thinkers is that they wrestle with uncertainty in a very clear-headed way.
I second this – one main piece of philosophy that SBF has talked about specifically in this vein is that there are sharply decreasing marginal returns to an individual making more money, but far, far weaker ones when trying to improve the world, so people should be less risk averse and try high-variance earning-to-give strategies. As a community, if we have 100 people making $350k/yr or 100 people all trying to make billions with a 1% chance of success, we'd have more money in the latter option – and he thought he had a 10% chance of success, so even better that he took that bet.
The argument is that high-risk earn-to-give opportunities like entrepreneurship are positive expected value, not that you're certain to succeed. If for every dozen EA entrepreneurs you expect one to make hundreds of millions that they give away, that's a good deal even if the other 11 fail.
In general, many EAs argue for being risk-neutral – taking riskier bets with your career than you otherwise would if you were just earning money for yourself.
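To make the expected-value arithmetic in the comments above concrete, here's a toy calculation – the 10% odds and $350k/yr salary figures are the ones quoted in this thread, while the success payoff is invented purely for illustration:

```python
# Toy comparison of the two earn-to-give strategies described above.
# The 10% success odds and $350k/yr salary come from the thread;
# the $5B payoff for a success is an assumption.

n_people = 100
salary_path = 350_000            # steady earn-to-give salary, $/yr
p_success = 0.10                 # quoted odds a startup "works out"
payoff_success = 5_000_000_000   # assumed wealth created by one success
payoff_failure = 0               # simplification: a failure adds ~nothing

steady_total = n_people * salary_path
risky_expected = n_people * (
    p_success * payoff_success + (1 - p_success) * payoff_failure
)

print(f"100 steady earners:             ${steady_total:,}")       # $35,000,000 per year
print(f"100 risk-neutral founders (EV): ${risky_expected:,.0f}")  # $50,000,000,000

# The risky strategy dominates in expectation even though ~90 of 100 fail --
# which is the risk-neutral argument, and also exactly what the critic above
# says ignores variance, selection effects, and the unheard-from failures.
```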
Maybe some of my antipathy comes from interacting with people in the world – I have spent a large amount of time in the humanitarian sector and a fair bit in the well-meaning-startup space – and the vast majority of the time EA was used as kind of a prosperity gospel. 'The only thing I owe to the world is to become a billionaire – nothing matters ethically beyond that. If I do, I can start being a good person. If I don't, statistically, someone else did, so I can continue giving nothing back.'
Maybe this is an unfair characterisation, but human beings being who we are, I think it's a philosophy that's very self-serving. I appreciate the probability focus of the writers on the EA websites – like I say, I genuinely believe in lower-case effective altruism – but what I've seen in the real world is either an Ayn Rand level of selfishness justified by deferred responsibility, or throwing money into cool-looking but useless projects in the developing world. I presume the counterargument is that since it's ineffective it isn't by definition EA, but at some point the debate starts being academic.
Pretty much everyone I know in EA is in law or finance, making $500k – 3 million a year (not billionaires), and is mostly giving money to organizations like GiveDirectly or others ranked very highly by GiveWell. Maybe there are people like the ones you describe, but my guess is that EA is a handy excuse, and if they didn't have that, they'd make up something else.
I think you're right to push back against people who say they don't need to bother doing good until / unless they become a billionaire. That's on the more extreme end of what I've heard, but arguments like this are a problem in EA and tend to be a huge turn-off to great people (like you!) who might otherwise be valuable community members. For what it's worth the EAs who I know basically universally hate Ayn Rand.
I'm curious - do you remember any examples of cool-looking but useless projects in the developing world that you've seen people throw money at?
what3words is my favourite example of absolutely terrible ideas that sound good to engineers. There have also been a couple of projects using blockchain for land registries which completely fail to take into account how smallholders are dispossessed (the person who can take your land by force can force you to send him your token as well). Numerous platforms of the general '[X] for [Y]' concept (LinkedIn for Refugees! Etsy for Weavers!) that never offer any advantage over existing commercial sites. Several things with drone delivery, although that one I concede may work if done correctly. There have also been any number of photo-op projects with crypto dudes coming into small villages to take pictures; I'm sure I can dig some up.
To be fair, I myself have built any number of completely useless and expensive software platforms funded by governments and NGOs in the developing world – I'd say 1 in 5 were any good when I was consulting – so finding a project that has real, direct impact is hard. And I'm very critical of the aid industry as well. But a lot of the Silicon-Valley-engineer projects parachute in with lots of hype and just make no sense given the realities on the ground.
The Hollywood waitress analogy doesn't work at all. Most no-name EAs are software developers, traders, corporate lawyers, etc. who still make buckets of money even if they aren't billionaires. Tbh it's kind of hard to not make money if you're a Stanford CS graduate, which a highly disproportionate number of EAs are.
From Matt's description of EA, I'm pretty sure the Chinese Communist Party would describe itself as the world's largest and most successful EA enthusiast. The gist of it seems to be: put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat.
"The gist of it seems to be put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat"
I'm not sure where you're getting this, but I'd say that this is very much not the gist of EA. Have you actually heard anyone in the EA community say or write anything resembling this?
I don't suppose SBF also has views on tax policy and regulation of crypto that he'd expect his preferred candidate to share, not to mention income inequality. Perhaps he could share those with us. The notion that the effective altruist billionaire class should determine the world's largest problems and their solutions, instead of voters and their representatives, seems laughable to me. While getting huge tax deductions, payable by the rest of us, for their pet causes.
There was an election - SBF didn't decide anything for anyone. And, even if elected, Flynn still had to participate in the legislative process along with all of the other elected officials.
Everyone has blind spots. And I assume most people would say "the government is stealing from poor people by taxing EAs" is a bridge too far, even if it may technically be true in some cases.
I think most EAs DON'T think that but it's pretty easy to expect some "libertarian" type will eagerly co-opt it. Humility is often a surrogate for trust and the EAs don't exactly have that.
I think a lot of it is that crypto bros can be unbelievably self-confident and annoying. No one is projecting humility from that space. I assume many SB readers have been the "young bright person at work who wants to and knows how to improve things but gets ignored" and this is a similar phenomenon.
I have another comment below with some of your points that I disagree with, but for what it's worth I do think you're right about internal status-seeking around AI safety and insularity in general being cultural problems within (some subsets of) the EA and rationalist communities. I think AI risk is a real and important problem and want the people who work on these problems to have a healthy community.
Has anyone seen an attempt to figure out how to weight the negative effects of different careers in the "earn to give" approach (or the traditional philanthropy model)? That would seem necessary for making choices about effective altruism. Obviously it's going to be hard to agree on anything for something new like crypto, but for something like a more typical "career in finance", can we say anything specific about the impact? If I make X million from pure trading activity, then other people have clearly lost X million, resulting in negative impact for those individuals? There would also be indirect impact from supporting the general trading system and the degree to which it might push companies into making short-term decisions that may harm employees or customers, but that's going to be rather more difficult to measure (and would quickly turn into a debate regarding the pros and cons of the whole economic system...)
There's also a separate interesting question about the impact of trying to maximize your income within a given company, if increased wages / bonuses for some categories of employees lead to pressure to reduce wages for others?
https://80000hours.org/articles/harmful-career/ has an analysis of "career in finance" specifically.
https://80000hours.org/career-reviews/trading-in-quantitative-hedge-funds/ also has some considerations.
*If I make X million from pure trading activity, then other people have clearly lost X million*
That’s not correct. If you think oil prices are going to fall you could sell Delta Airlines a futures contract to deliver X barrels of oil to its refinery in six months. If you’re right you can buy the oil for less and sell it to Delta at the agreed price. Delta is fine with that as they need predictability to price tickets, plane routes, etc.
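To put invented numbers on the hedge described above – a quick sketch of why the trade isn't simply zero-sum, even though cash changes hands:

```python
# Invented numbers illustrating the oil-futures example above.

strike = 90.0            # $/barrel agreed today for delivery in six months
spot_at_delivery = 75.0  # what oil actually costs when the contract settles
barrels = 10_000

trader_profit = (strike - spot_at_delivery) * barrels  # buy at 75, deliver at 90
airline_bill = strike * barrels                        # fixed and known in advance

print(f"Trader profit:       ${trader_profit:,.0f}")   # $150,000
print(f"Airline's fuel bill: ${airline_bill:,.0f}, locked in months ago")

# In pure cash terms the airline "lost" what the trader won, but it bought
# price certainty it could plan routes and ticket prices around -- the trade
# transfers risk to whoever wants to carry it, which is why both sides can
# come out ahead relative to their alternatives.
```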
I think that's true enough when we're talking about those fundamental transactions, but are we operating in a world with so much money sloshing around in the financial sector that it's effectively a different thing now than what we were taught in Econ class?
The amount of "finance" that is speculative trading activity is just not that high. You are missing all the things like routine financing, private equity (which despite the weirdly bad publicity just comes in and runs companies better to IPO or sell), and risk distribution for existing lending (eg CLOs where investors take the risk and the bank books immediately).
Realistically, though, there just aren't enough hedge fund jobs to make this a big problem.
Trading is a positive-sum activity. You are producing more accurate price information. If it were in fact zero-sum, people would leave the market once they kept getting beat.
We can argue about the usefulness of crypto, but the first statement isn't true. If I tell you I have a product that can turn lead into gold, and sell it to you, and it does no such thing, that's not a useful product. Obviously I created something you wanted, say false hope, which might have 'utility' under some strict definition, but most people would consider the product itself useless and the transaction fraudulent. Information gaps prevent perfectly efficient markets.
Cryptocurrency is useful in that it very efficiently identifies people whose opinions I can safely ignore.
(it is worse than existing techniques for *everything* else)
"You can't trick millions of people for over ten years."
*Atheists have entered the chat*
I'm very frustrated when the "heart" function doesn't work (maybe someone can explain why).
Anyway, Tdubs: "heart."
Substack has some sort of lag problem in its UI between clicking the heart (which does get registered immediately) and the display showing that you clicked it. It also seems to be worse on non-top-level comments. My advice would be to just click it once and wait, as clicking it again might undo the like.
I wish that I had more than one heart to give, friend.
Traditionally, people probably didn't really believe the stuff they showed up to church for. They certainly didn't read their scripture or understand their Latin-speaking priests – the idea that you'd even want to do that is an invention of Protestantism. (And the idea that Buddhists did that comes from them importing Protestantism too.)
Traditionally the church in England 1. told people not to have sex the wrong way, and 2. was used to fundraise constant parties for "saints' days".
Has had the unfortunate historical precedent of being harmful to those who *don't* believe it, though.
I'm not anti-crypto – call me neutral on it – but you just argued crypto is useful because people (specifically yourself?) made money on it. Not sure that logically computes...
> There are no information gaps in crypto
Well, there are, because essential components like connections to the real world can't be "on chain". Exchange behaviors, side bets, whether a project is a rug or not, that kind of thing. The big one being Tether, which wasn't public at all until NY sued them and now seems to be lying about its assets.
Plus, public smart contracts regularly get hacked for huge amounts, so the auditing systems don't seem to work very well. I don't think it can ever be safe to build on a system like this, where mistakes/bad transactions can't be reverted. The ability to do that is more important than any tradfi technical drawback.
The existence of social security is proof that you can trick hundreds of millions of people into supporting a Ponzi scheme for decades.
'. . . the fact that it is still going pretty strong after more than ten years and has survived several major crashes shows that it is not a Ponzi scheme.'
It does not show that.
When people say crypto is a Ponzi scheme, I don't think it's meant literally; it's a metaphor. There may be some inherent value to the technology, but it appears to be wildly overvalued in the market at the moment; a bubble is something that can persist for a long time, and people who are invested in the bubble have a material interest in keeping it inflated.
For example, Tesla stock seems to be overvalued by conventional measures, even with the haircut it's been taking lately. That doesn't mean the company itself is running some kind of scam; the people I know who own one are very happy with the product. There's just an apparent disconnect between what the company is doing, and what the investors in the stock are doing on the secondary markets.
There is a tedious debate many people have about 'what is a Ponzi scheme'. Crypto-skeptics often say 'crypto is a Ponzi scheme', and crypto-enthusiasts then respond that it doesn't meet various criteria to be considered a Ponzi scheme.
To be honest, this debate does not interest me very much. Things change, and new things are made, and not everything is an exact analogue of some past thing. The relevant point is that crypto assets largely do not represent sources of wealth exogenous to the system of their trading (in the way that stocks, bonds or commodities do). Consequently, what crypto investing shares with a Ponzi scheme is that there is a broadly fixed sum of capital invested, and that what Person A gets in profit comes from losses by Persons B, C and D. In that sense, it is analogous to a Ponzi scheme.
Madoff's never actually crashed and probably could've run for a long time. The government just spoiled it by telling everyone it was a Ponzi.
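A toy illustration of the closed-pool point made above (my construction, not the commenter's): if an asset pays no external cash flows, summing everyone's trading P&L always comes to zero, so any realized profit is someone else's loss:

```python
# Four people trade a token that produces no outside cash flow.
# Each trade just moves dollars from buyer to seller.

trades = [
    ("B", "A", 100),  # A sells the token to B for $100
    ("C", "B", 150),  # B flips it to C for $150
    ("D", "C", 120),  # C sells to D at a loss
]

pnl = {}
for buyer, seller, price in trades:
    pnl[buyer] = pnl.get(buyer, 0) - price
    pnl[seller] = pnl.get(seller, 0) + price

print(pnl)                # {'B': 50, 'A': 100, 'C': -30, 'D': -120}
print(sum(pnl.values()))  # 0 -- no exogenous wealth, only transfers

# D still holds the token, but its "value" is whatever the next buyer pays:
# with no external source of return, gains must net out against losses.
```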
I would be amazed if the percentage of total current crypto wealth owned by people in "imperialist" countries is lower than the percentage of world GDP that those countries represent. So your best-case scenario is that we've created a new paradigm that exacerbates historical wealth inequality?
In order to generate consumer surplus and decouple wealth from land, technology has to actually *generate value*.
Distributed ledgers *may* someday do so in private applications.
The public ones which underpin current cryptocurrencies do not. They have value only insofar as a tulip bulb once did, as a vehicle for speculation.
I agree that *technology* is a good thing, but not all *technologies* automatically are.
> Distributed ledgers *may* someday do so in private applications.
I think they have a genuine mindset advantage, in that you can easily delete/overwrite data in a traditional database, and it's difficult to replicate those guarantees in some of them. Immutable data is very useful for correctness and programmers aren't taught to use it enough.
…but all that technology was invented in the 70s, and "blockchain" reinvents it in the least efficient way possible, plus tries to add a smart-contract language designed by amateurs who didn't think about things like integer overflow first.
Interesting. Can you link to anything to read on this?
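For the curious, here's a minimal sketch (mine, not the commenter's) of the kind of pre-blockchain idea being gestured at: an append-only log made tamper-evident with plain hash chaining, a technique dating roughly to the late 70s/early 80s (SHA-256 is just a modern stand-in), with no mining or consensus network involved:

```python
import hashlib

GENESIS = "0" * 64

def entry_hash(record: str, prev: str) -> str:
    # Each entry's hash commits to its record AND the previous hash.
    return hashlib.sha256(f"{prev}|{record}".encode()).hexdigest()

def append(log: list, record: str) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"record": record, "prev": prev,
                "hash": entry_hash(record, prev)})

def verify(log: list) -> bool:
    prev = GENESIS
    for e in log:
        if e["prev"] != prev or e["hash"] != entry_hash(e["record"], prev):
            return False
        prev = e["hash"]
    return True

log = []
append(log, "alice pays bob 5")
append(log, "bob pays carol 2")
print(verify(log))   # True

log[0]["record"] = "alice pays bob 500"  # tamper with history...
print(verify(log))   # False -- the chain no longer checks out
```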
Maybe it’s “not about crypto” but if the biggest name in applied consequentialism right now is, essentially, in the business of marketing Ponzi schemes to the gullible as his day job, that seems kinda relevant to me when approaching the question of how much weight I should assign his opinions about existential risks?
It’s an interesting new development. Up until a year or two ago I would have thought of Bill Gates as the one famous donor who was most explicitly affiliated with Effective Altruism, though he tries to cover it up in order to seem more normal.
Is he explicitly affiliated? I always took his philanthropic actions as a rather banal "help the world's neediest" without all the philosophical forays described in this article.
Not quite explicitly. But even before the official Effective Altruism movement got started, the Gates Foundation got into the idea of evaluating their charity by concrete metrics. And Gates and Buffett created the Giving Pledge, inspired at least in part by Peter Singer.
It is just a completely different thing to try to objectively measure the performance of a charity versus what the EA people do. The difference is in the EA philosophy of what outcomes you consider desirable or how you weight their desirability. The Gates approach is perfectly understandable to a normal person; the EA part is not. That being said, I have no idea how much Gates buys into the EA part.
I don't think that's true. "Objectively measure the performance of charity" is exactly what GiveWell does, which is probably the most widely known and influential EA organization in the world.
In fact, "objectively measure the performance of a charity" is a pretty good description of what EA is. It's just that words like "measure" and "performance" aren't so clear when you try and dig into the details. Performance relative to what outcome? What are we measuring exactly? And so we get a lot of internet arguments about what outcomes we should care about (including how much to care about people in the far future, etc).
I think my post was unclear. The EA-movement philosophy that we should not place any more weight on our own community than on anyone else's comes in addition to the attempt to objectively evaluate charitable impact. But I believe these are completely separable ideas, and the EA-specific one is completely foreign to most people, while the objective evaluation is perfectly normal to them.
But I see that as the sleight of hand that EAs use to nudge people toward their preferred causes. I may be determined to give to a non-EA-preferred cause, but I would also like to know which charities have the better performance in achieving that non-EA-preferred goal.
Elon is the most famous person to be literally associated with the religious EA people – he got his e-girl musician girlfriend by making a LessWrong joke at a party, and founded OpenAI to literally act on their silly ideas about evil-computer defense. OpenAI of course abandoned that mission because it has nothing to do with any real-life concerns.
...so? If you think that institutional investors are immune from the delusions of crowds, I have a CDO bond that I'd be happy to let you in to the AAA-rated (by Moody's!) top tranche of.
I think you are entirely overconfident if you think it is obvious that institutions are being gullible by investing in crypto. And even if you are right, "it's not fair that this guy is ripping off Wall Street and we shouldn't trust him" is a super weird take.
- I have no particular insight into how much institutional exposure to crypto there is, and I very much hope it's "less than I fear"
- but I fear it's a lot: nobody knew how overextended on forex trading Barings Bank was until Barings ceased to exist and that was _one rogue trader_. One of the major morals of 2007 was, for me, that no compliance department on god's earth _actually_ knows what their trading desks are getting up to, and the ratings agencies will rubberstamp known bilge without a second thought.
- "ripping off wall street" is all fun and games until it tanks the entire economy taking your job and your retirement savings with it, which already happened once in the last 20 years and I'm not looking forward to Round Two: Crypto Boogaloo
Being able to ride out a market crash because you're in your 20-40s and have a stable job and can buy the dip is great. (Source: me, stayed employed in tech through the 2001 crash and the entire Great Recession, was definitely preferable to all the alternatives.) But allow me to introduce you to the extremely salient concept known as "time", which can crop up in two rather important ways:
- if you are approaching or at retirement age, having your pension/401k/IRA wiped out is, in fact, a pretty big deal: your heirs might get to ride the bounce back but there's every chance that you won't.
- rent (or your mortgage payment) is due on the first of the month and they don't care what happened to your retirement savings, nor do they care if you lost your job because of an economic crash. Similar dynamics apply to your grocery and heating bills.
I feel like "massive economic crashes are bad" should not be proposition needing to be re-litigated after 2008 but apparently here we are?
I’m not sure if the EA philosophy leads to this, but the larger rationalist community does act like they’ve accidentally logicked themselves into joining a cult. They like to live in group homes, are into polyamory because they couldn’t think of logical arguments against it, and do sometimes join 100% literal EA cults like Leverage Research.
Beyond unhealthy absolutist philosophy, I think joining cults might just be what people do when they live in Berkeley. It’s in the water.
The lack of community ties thing troubles me as well. Yes, EAs say that you can still have friends and family, but only because people would burn out if they didn't. Which strikes me as abhorrent. My relationships with others are deeply important and morally justified; they aren't simply a means to the end of giving me the emotional bandwidth to work longer hours so I can donate to EA charities. I think people are deeply entitled to pursue and value particular relationships and projects in their lives for their own sake. Not exclusively, of course – I do think everyone should try to devote some of their efforts to making the world better in a consequentialist way – but it's only one aspect of the good life.
While there are some lovely things at the abstract philosophical level with EA, what would make the world a far better place is if we could somehow convince everyone to increase their normal charitable giving by 10%.
There may be some who say this. But I think there are others who say that friends and family are what make life worth living and worth saving, and that you are a person just as much as anyone else is, and that you deserve as much time with your friends and family as anyone else. But if you can help a hundred other people have more quality time with their friends and family, it may well be worth sacrificing some of your own for that.
I don't think any EAs think that you should give up on having friends and family. I think it's clear to everyone that that would be an unhealthy recipe for disaster.
EA reminds me a lot of those who believe that business should maximize its profits and no other objective is morally justified(1). Both are principles with a lot of heft behind them that are also overly simplistic and undermined by a lack of curiosity and humility.
Pursuing profits drives economic efficiency, which does a lot to improve social welfare, even if it also generates lots of negative externalities. But some take it to imply that promoting employee welfare at the expense of profits is unwise, without even asking whether “investing” in employee welfare can generate a long-term return. Similarly, pursuing EA would probably improve the world greatly. But it takes incredible hubris to believe that a calculation of the expected harm from AI, based on no data, is correct – and a greater potential harm than present issues like the impact of air pollution on health, or inequality.
What makes both camps seem like “weirdos” is the way they are so simplistic about the world. The human instinct that things like social ties matter should be respected, and those who don’t respect it should rightfully be looked at askance.
1. I’m not supporting the “Friedman Doctrine” that supports profit maximization based purely on the idea that it’s the only way to represent the interest of owners, but the more general idea outlined above.
It strikes me a lot like the rationalist community. Many of them make good points, and society would benefit if most people moved on the margin towards their stated tenets. However, lots of in-group status seeking, lack of nuance, homogeneity, and a complete lack of humility make me nervous about fully buying what they're selling. Personally, I'd prefer EA people to donate and encourage others to donate than run the government.
There's a lot of overlap, especially with adherents, but I wouldn't say they're the same, since they're about different things. Personally, I'd describe myself as EA-adjacent, but the rationalist community creeps me out.
Yeah, I was specifically thinking about Julia Galef, who seems to be Matt’s introduction to EA but is (or used to be) one of the LW-style rationalists who think you shouldn’t use your brain normally and should instead explicitly calculate Bayes’ theorem on everything, or possibly just pretend you’re doing that and say the word “prior” a lot.
He’s also talked about SSC, who is, eh, good in his professional area and isn’t too deep into the religious aspects of that community, but has a strange personal life-mission to get everyone to learn about his weird online friends who invented versions of conservatism nobody else believes in.
No, it started as philosophy grad students eating only rice and beans to give more money to Oxfam because they were persuaded by Peter Singer. I don't really know the intellectual history but I think the shared fondness for thought experiments about the far future plays a role.
I don't really disagree with any of that, and the first EA stuff I heard about was largely in line with that.
But, increasingly my impression is that the self defined EA group are high on their own supply.
And as a guiding philosophy for representative democracy it makes no sense: "Vote for me as your Rep. and I will assign no priority to you or your community's interests whatsoever."
On the other hand "vote for me and I will focus on substantive, evidence-based policies to improve your life rather than empty ideological posturing" sounds like a good sales pitch for a politician.
It *sounds* like one, but I don’t think it always sells. Trump’s election pitch was much more “vote for me and I will ideologically posture on your behalf against the nerds that aim for evidence”. People care a lot about symbolism.
Somewhat, but I think a big part of Trump's appeal was that he was a "businessman" who knew how to get things done. Obviously in office he leaned much more heavily on the ideological posturing, but I think people generally underrate how much people voted for Trump in 2016 because they thought he would be effective.
Like Andrew, I don't disagree with this, but what bothers me is I think most EAs only advocate for the 10% rule because it's more likely to persuade people to get on board, not because they think only giving that much is sufficient. They would say that you should give up your relationships to focus more on earning to give if you thought it would be sustainable for you. I've explored the (online) EA community, and most of them *are* dyed-in-the-wool act utilitarians. So while I'll happily donate money to Give Well charities, I'm suspicious of the broader movement.
In my experience, most EAs are realistic that doing something like donating 50% of your income is not sustainable or psychologically doable for most people. But consider that there are large numbers of people working for normal non-EA charities who are effectively taking huge pay cuts because they want to do good. A lot of people genuinely want to devote their life to making the world a better place but aren't sure how. It's good that EAs are clear-eyed about it.
EA seems like a complete political non-starter. It combines the worst elements of neoliberalism (cold, reductive focus on efficient generation of dollars/hedons) and progressivism (esoteric and unpopular views held predominantly by people with a college degree).
You forgot the neo-feudalism and pseudo-religiosity.
We’re basically talking about the nobility giving out boons to deserving members of the peasantry in exchange for indulgences regarding how they came to be nobility.
You are implicitly viewing wealth accumulation as coming at the expense of someone else. Thankfully, it doesn’t. Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants.
So, something like earning to give is good, *even if you never give!* It is far better to do something productive, that people actually *want*, than to work a personally satisfying but low paying job.
My other comments at SB offer sufficient clarity on my thinking that I don't feel obliged to explain that I do, in fact, understand the concept of value generation.
"Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants."
But this sentence is sufficient to prove to me that you prefer to exist in some abstracted theoretical realm in which "lobbying" and "rent-seeking" are not concepts which run rampant through the American economy.
The EA people are a mix of professional class types looking for an endorphin hit (fine, better to get it helping poor people than snorting coke, but stop being a preachy fuck about it) and the rich seeking yet another reason to claim their iron hold on the American economy is in fact a good thing.
Also, in support of my initial response, you are explicitly on-record saying that policies which support a decent standard of living for first world citizens are bad because they take money away from wealthy EAers who will donate it to third world citizens.
I quote: "Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?"
Since you've already taken the extreme-to-the-point-of-parody view and put it on record, I think we're done here.
Not so - lowering taxes so that more gets donated overseas is a very inefficient way of doing that. I am arguing that we should have *massively more* foreign aid spending (as well as completely open borders.)
“It’s my duty to earn as much as possible by any means because I have the vision and wisdom to disburse funds according to the interests of the greater good.”
It says nothing of a duty to create a society in which everyone is earning a decent living through the fruits of their own labor, instead of relying on the rich for oh-so-enlightened, “targeted,” “optimized” handouts.
That latter outcome is something only policy and politics can achieve.
FRIGID:
You put your finger on my general unease with EA, which I couldn't articulate. And this is why I prefer predistribution vs redistribution. Change the rules of the economy such that work is rewarded vs hoping the rich toss a few pennies downwards.
ME:
Policies that do that are actively *bad* in the current EA framework because they redirect income from donation-minded rich westerners to middle-income and working class westerners who are, by global standards, already rich.
It’s a terrible, profoundly fucked ethical framework on a macro level.
YOU:
Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?
Again, you're explicitly on-record opposing policies that will improve quality of life for American/European workers because it will redirect dollars from (in your mind) ultra-poor EA donor recipients to rich-by-global-standards working class Westerners.
But so would curtailing rent-seeking by capital in the US, so would regulating crypto effectively, so would introducing a payroll tax-funded universal healthcare scheme...
I’m trying to avoid replying to you, but this is just too obvious for even you to fail to understand:
The job of avoiding neofeudalism should not be left in the voluntary hands of the would-be neofeudalists, it should be forced upon them by the citizenry using the power of the state that they’re so desperately attempting to buy off at every turn.
EA isn't libertarianism, it doesn't imply that the state shouldn't also be redistributive. EAs generally support the standard laundry list of Democratic Party policies, but think e.g. foreign aid and asteroid prevention should be given more money.
Given that, it's only "voluntary" in the sense that all personal morality is "voluntary." Of course people can choose not to follow it, but then they're not EA in any sense.
EA is dead in the water because it's a thin veil for the bad guys, duh.
Virtually no one gives away enough to crimp their kids' ability to be at the top of the heap; EA is just another of many fig leaves and/or outright bribes designed to keep the rest of us from taking it all from them after they die.
Which is, to my mind, an inevitability. It's soon going to be near-universally understood that permitting the intergenerational accumulation of wealth above a level sufficient to, say, generate a few multiples of the median income in passive income, is directly incompatible with democratic governance and the general well-being. At which point "inheritance taxes" are no longer going to be a question of revenue but of self-defense.
EA is yet another attempt to hold back the tide on that.
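For a sense of scale, a back-of-envelope version of the threshold proposed above (all numbers are illustrative assumptions, not the commenter's):

```python
median_income = 75_000   # assumed US household median, $/yr
multiple = 3             # "a few multiples of the median income"
real_return = 0.04       # assumed sustainable passive rate of return

cap = multiple * median_income / real_return
print(f"Implied cap on inheritable wealth: ${cap:,.0f}")  # $5,625,000
```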
I think with a lot of the EA movement specifically, a lot of people have trouble squaring the source of funds and their use – you can buy someone like Elon Musk or Bill Gates as a philanthropist, they made money by making things, but many people (including myself) perceive crypto billionaires as having made money off false promises to everyday people. If you see crypto as a ponzi scheme, it becomes harder to believe the people in it are genuine altruists.
Obviously the classic argument exists that you can do some evil to make money to do good of greater utility, but we know human beings are self-serving, and this is just a retread of the politicians who compromise everything to win power because 'without power we can't help anyone.' Well those people get power and usually don't help anyone anyway. You can also argue that they think crypto is good but, well, I'm not sure that's an argument for the effectiveness of their altruism.
I'm a believer in the principles of effective altruism – in fact, I've structured my career to do meaningful work on an important public health problem, and taken a large pay cut to do so – and I'm funded by one of the large foundations, so I see the role rich funders play. But as often happens in the modern age, being effectively altruistic and being 'in the EA community' are not the same thing, and these subcultures form, become insular, and lose sight of their goals. Long-term utility is precisely one of those areas that allows human beings to subconsciously skew their thinking around their own selfish interests, and in fact the 'rationalist' community seems to largely be about internal status-seeking around issues like AI risk than thinking through real problems, as well as making the fundamental 'the world is just an engineering problem' error common to both crypto and online rationalism.
Frankly, this is the telling sentence 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all.
"Frankly, this is the telling sentence 'He briefly [worked] directly for the Centre for Effective Altruism...but while there hit upon a crypto trading arbitrage opportunity.' Kind of says it all."
I'm not sure I understand. He found a crypto trading arbitrage that generated millions of dollars which he intended to give away. He went from that to founding FTX which mad him a billionaire and allowed him to give away vastly more money. Would it have been better if he stayed at the Centre for Effective Altruism?
I'll concede here that this was a bit of a hyperbolic flourish to end the comment – in the end, in this particular case, it is better for the world that he is able to give vastly more money to (hopefully) effective causes.
But in general, I find this mythical certainty to be ethically problematically, like all those actors who came to LA just 'knowing' they'd make it and no one believedin it and here they are at the Oscars. Well, no one's interviewing the waitresses and the busboys, are they? So most people who do what he did here are pretty likely spending years trying to get their business to work, or even more likely failing to do so, as most people do. A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan. What if it was just luck that it worked for him, and we haven't heard from the dozens of finance people who are telling us that it'll all be worth it when they're billionaires and save the world?
Broadly speaking, I think most consequentialists undervalue the uncertainties of the human mind, especially their own. So I think taking a decision like this, where you go from definitely working on something good in a small way to devoting yourself to making money exclusively with the intention of giving it away, undervalues your own uncertainty about both your success and your future decision making. I think if you took everyone who told themselves 'I'll just focus on being rich now, I'll do some good later', I'm not sure you'd wind up with much of a net good in the aggregate.
"A big flaw in the general EA logic is that everyone thinks they're destined for inevitable greatness and smart enough to see the consequences of every plan."
I don't know, I'm not sure I've seen this. In fact I have the opposite experience with EA folks in that they tend to fetishize probabilistic thinking to an unhelpful degree sometimes. But more to the point I seriously doubt that SBF was certain it would work out and he would become a billionaire. In fact, in interviews I've heard with him he talks about giving FTX a ~10% chance of working out (i.e. not failing outright). But even in that case it is still a positive expected value play. If it fails, he goes back to Alameda of some other prop trading firm and continues giving a few hundred thousand a year to charity. But if it works, it could be huge and he could be giving away billions (which is what happened). Even if that latter is unlikely it is still worth doing because the payoff is huge.
More generally, I think the way you characterize the amount of certainty in the EA community is not accurate. My experience reading EA thinkers is that wrestle with uncertainty in a very clear-headed way.
I second this- one main piece of philosophy that SBF has talked about specifically in this vain is that there is decreasing marginal returns for making more money for an individual, but far, far less for trying to improve the world, so people should be less risk averse and try high-variance earning to give strategies. As a community, if we have 100 people making 350k/yr or 100 people all trying to make billions with a 1% chance of success, we'd have more money in the latter option- and he thought he had a 10% chance of success, so even better that he took that bet
The argument is that high risk earn to give opportunities like entrepreneurship are positive expected value, no that you're certain to succeed. If for every dozen EA entrepreneurs you expect one to make hundreds of millions that they give away, that's a good deal even if the other 11 fail.
In-general many EAs argue for being risk-neutral, so taking riskier bets with your career than you otherwise would if you were just earning money for yourself.
Maybe some of my antipathy comes from interacting with people in the world – I have spent a large amount of time in the humanitarian sector and a fair bit in the well-meaning-startup space – and the vast majority of the time EA was used as kind of a prosperity gospel. 'The only thing I owe to the world is to become a billionaire – nothing matters ethically beyond that. If I do, I can start being a good person. If I don't, statistically, someone else did, so I can continue giving nothing back.'
Maybe this is an unfair characterisation, but human beings being who we are, I think it's a philosophy that's very self-serving. I appreciate the probability focus the writers for EA websites are doing like – like I say, I genuinely believe in lower-case effective altruism – but what I've seen in the real world is either an Ayn Rand level of selfishness justified by deferred responsibility, or throwing money into cool-looking but useless projects in the developing world. I presume the counterargument is that since it's ineffective it isn't by definition EA, but at some point the debate starts being academic.
Pretty much everyone I know in EA is in law or finance, making 500k - 3 million a year (not billionaires) and is mostly giving money to organizations like Give Directly or others ranked very high in GiveWell. Maybe there are people like the ones you describe, but my guess is that EA is a handy excuse and if they didn't have that, they'd make up something else.
I think you're right to push back against people who say they don't need to bother doing good until / unless they become a billionaire. That's on the more extreme end of what I've heard, but arguments like this are a problem in EA and tend to be a huge turn-off to great people (like you!) who might otherwise be valuable community members. For what it's worth the EAs who I know basically universally hate Ayn Rand.
I'm curious - do you remember any examples of cool-looking but useless projects in the developing world that you've seen people throw money at?
what3words is my favourite example of absolutely terrible ideas that sound good to engineers. There have also been a couple around using blockchain for land registries which completely fail to take into account how smallholders are dispossessed (the person who can take your land by force can force you to send him your token as well). Numerous platforms of the general '[X] for [Y]' concept (LinkedIn for Refugees! Etsy for Weavers!) that never offer any advantage over existing commercial sites. Several things with drone delivery, although that one I concede may work if done correctly. There have also been any number of photo op projects with crypto dudes coming into small villages to take pictures, I'm sure I can dig some up.
To be fair, I myself have built any number of completely useless and expensive software platforms funded by governments and NGOs in the developing world – I'd say 1 in 5 were any good when I was consulting – so finding a project that has real, direct impact is hard. And I'm very critical of the aid industry as well. But a lot of the Silicon-Valley-engineer projects parachute in with lots of hype and just make no sense given the realities on the ground.
The Hollywood waitress analogy doesn't work at all. Most no-name EAs are software developers, traders, corporate lawyers, etc. who still make buckets of money even if they aren't billionaires. Tbh it's kind of hard to not make money if you're a Stanford CS graduate, which highly disproportionate number of EAs are.
From Matt's description of EA, I'm pretty sure the Chinese Communist Party would describe itself as the world's largest and most successful enthusiasts. The gist of it seems to be put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat.
"The gist of it seems to be put all the wealth of the nation into the hands of an elite who then spend it on what they consider the public good while the rest of us rubes take a backseat"
I'm not sure where you're getting this but I say that this is very much not the gist of EA. Have you actually heard anyone in the EA community say or write anything resembling this?
I don't suppose SBF also has views on tax policy and regulation of crypto that he'd expect his preferred candidate to share, not to mention income inequality. Perhaps he could share those with us. The notion that the effective altruist billionaire class should determine what are the world's largest problems and their solutions instead of voters and their representatives seems laughable to me. While getting huge tax deductions payable by the rest of us for their pet causes.
There was an election - SBF didn't decide anything for anyone. And, even if elected, Flynn still had to participate in the legislative process along with all of the other elected officials.
Everyone has blind spots. And I assume most people would say "the government is stealing from poor people by taxing EAs" is a bridge too far, even if it may technically be true in some cases.
I think most EAs DON'T think that but it's pretty easy to expect some "libertarian" type will eagerly co-opt it. Humility is often a surrogate for trust and the EAs don't exactly have that.
I think a lot of it is that crypto bros can be unbelievably self-confident and annoying. No one is projecting humility from that space. I assume many SB readers have been the "young bright person at work who wants to and knows how to improve things but gets ignored" and this is a similar phenomenon.
I have another comment below with some of your points that I disagree with, but for what it's worth I do think you're right about internal status-seeking around AI safety and insularity in general being cultural problems within (some subsets of) the EA and rationalist communities. I think AI risk is a real and important problem and want the people who work on these problems to have a healthy community.
Has anyone seen an attempt to figure out how we weight the negative effects of different careers in the "earn to give" approach (or traditional philanthropy model) - that would seem necessary to make choices regarding effective altruism? Obviously it's going to be hard to agree on anything for something new like crypto, but for something like a more typical "career in finance", can we say anything specific about the impact? If I make X million from pure trading activity, then other people have clearly lost X million, resulting in negative impact for those individuals? There would also be indirect impact from supporting the general trading system and the degree to which it might push companies into making short-term decisions that may harm employees or customers, but that's going to be rather more difficult to measure (and would quickly turn into a debate regarding pros and cons of the whole economic system...)
There's also a separate interesting question of the impact of trying to maximize your income within a given company, if increased wages / bonuses for some categories of employees leads to pressure to reduce wages for others?
https://80000hours.org/articles/harmful-career/ has an analysis of "career in finance" specifically.
https://80000hours.org/career-reviews/trading-in-quantitative-hedge-funds/ also has some considerations.
<I>If I make X million from pure trading activity, then other people have clearly lost X million</I>
That’s not correct. If you think oil prices are going to fall you could sell Delta Airlines a futures contract to deliver X barrels of oil to its refinery in six months. If you’re right you can buy the oil for less and sell it to Delta at the agreed price. Delta is fine with that as they need predictability to price tickets, plane routes, etc.
I think that's true enough when we're talking about those fundamental transactions, but are we operating in a world with so much money sloshing around in the financial sector that it's effectively a different thing now than what we were taught in Econ class?
The amount of "finance" that is speculative trading activity is just not that high. You are missing all the things like routine financing, private equity (which despite the weirdly bad publicity just comes in and runs companies better to IPO or sell), and risk distribution for existing lending (eg CLOs where investors take the risk and the bank books immediately).
Realistically, though, there just aren't enough hedge fund jobs to make this a big problem.
Trading is a positive sum activity. You are finding more accurate information. If it is in fact zero sum, then people would leave the market if they’re getting beat.
We can argue about the usefulness of crypto, but the first statement isn't true. If I tell you I have a product that can turn lead into gold, and sell it to you and it does no such thing, that's not a useful product. Obviously I created something you wanted, say false hope, that might have 'utility' under some strict definition, but most people would consider the product itself as useless and the transaction as fraudulent. Information gaps prevent perfectly efficient markets.
Cryptocurrency is useful in that it very efficiently identifies people whose opinions I can safely ignore.
(it is worse than existing techniques for *everything* else)
"You can't trick millions of people for over ten years."
*Atheists have entered the chat*
I'm very frustrated when the "heart" function doesn't work (maybe someone can explain why).
Anyway, Tdubs: "heart."
Substack has some sort of lag problem in its UI between the clicking of the heart (which does get registered immediately) and the display of showing that you clicked the heart. It also seems to be worse on non-top level comments. My advice would be to just click it once and wait, as clicking it again might undo the like.
I wish that I had more than one heart to give friend.
Traditionally people probably didn’t really believe the stuff they showed up to church for. They certainly didn’t read their scripture or understand their Latin speaking priests - the idea that you’d even want to do that is an invention of Protestantism. (and the idea that Buddhists did that comes from them importing Protestantism too)
Traditionally the church in England 1. told people not to have sex the wrong way 2. was used to fundraise constant parties for “saints’ days”.
Has had the unfortunate historical precedent of being harmful to those who "don't* believe it, though.
I'm not anti crypto, call me neutral on it, but you just argued crypto is useful because people (specifically yourself?) made money on it. Not sure that logically computes...
> There are no information gaps in crypto
Well, there are because essential components like connections to the real world can’t be “on chain”. Exchange behaviors, side bets, whether a project is a rug or not, that kind of thing. The big one being Tether, who wasn’t public at all until NY sued them and now seems to be lying about their assets.
Plus, public smart contracts regularly get hacked for huge amounts so it doesn’t seem the auditing systems work very well. I don’t think it can ever be safe to build on a system like this where mistakes/bad transactions can’t be reverted. The ability to do that is more important than any tradfi technical drawback.
The existence of social security is proof that you can trick hundreds of millions of people into supporting a Ponzi scheme for decades.
'. . . the fact that it is still going pretty strong after more than ten years and has survived several major crashes shows that it is not a Ponzi scheme.'
It does not show that.
When people say crypto is a Ponzi scheme, I don't think it's meant literally; it's a metaphor. There may be some inherent value to the technology, but it appears to be wildly overvalued in the market at the moment, having a bubble is something that can persist for a long time, and people who are invested in the bubble have a material interest in keeping the bubble inflated.
For example, Tesla stock seems to be overvalued by conventional measures, even with the haircut it's been taking lately. That doesn't mean the company itself is running some kind of scam; the people I know who own one are very happy with the product. There's just an apparent disconnect between what the company is doing, and what the investors in the stock are doing on the secondary markets.
There is a tedious debate many people have about 'what is a Ponzi scheme'. Crypto-skeptics often say 'crypto is a Ponzi scheme', and crypto-enthusiasts then dispute that it doesn't meet various criteria to be considered a Ponzi scheme.
To be honest, this debate does not interest me very much. Things change, and new things are made, and not everything is an exact analogue of some past thing. The relevant point is that crypto assets largely do not represent sources of wealth exogenous to the system of their trading (in the way that stocks, bonds or commodities do). Consequently, what crypto investing shares with a Ponzi scheme is that there is a broadly fixed sum of capital invested, and that what Person A gets in profit comes from the losses of Persons B, C and D. In that sense, it is analogous to a Ponzi scheme.
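A toy sketch of that closed-system accounting, with entirely made-up numbers (Python, purely illustrative): when no exogenous wealth enters the pool, one participant's profit is exactly the others' combined loss.

```python
# Closed trading pool: participants only trade the asset among themselves,
# so every dollar taken out was first put in by someone else.
buys  = {"A": 100, "B": 150, "C": 200, "D": 250}   # dollars each paid in
sells = {"A": 400, "B": 120, "C": 100, "D": 80}    # dollars each took out

pnl = {p: sells[p] - buys[p] for p in buys}
print(pnl)                      # {'A': 300, 'B': -30, 'C': -100, 'D': -170}
assert sum(pnl.values()) == 0   # A's +300 is exactly B, C and D's -300 combined
```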
Madoff’s scheme never actually crashed and probably could’ve run for a long time. The government just spoiled it by telling everyone it was a Ponzi.
I would be amazed if the percentage of total current crypto wealth owned by people in "imperialist" countries is lower than those countries' share of world GDP. So your best-case scenario is that we've created a new paradigm that exacerbates historical wealth inequality?
In order to generate consumer surplus and decouple wealth from land, technology has to actually *generate value*.
Distributed ledgers *may* someday do so in private applications.
The public ones which underpin current cryptocurrencies do not. They have value only insofar as a tulip bulb once did, as a vehicle for speculation.
I agree that *technology* is a good thing, but not all *technologies* automatically are.
> Distributed ledgers *may* someday do so in private applications.
I think they have a genuine mindset advantage, in that you can easily delete or overwrite data in a traditional database, and it’s difficult to replicate that kind of immutability in some of them. Immutable data is very useful for correctness, and programmers aren’t taught to use it enough.
…but all that technology was invented in the 70s, and “blockchain” reinvents it in the least efficient way possible, plus tries to add a smart contract language designed by amateurs who didn’t think about things like integer overflow first.
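For anyone who wants to see what the pre-blockchain version of "immutable data" looks like, here's a minimal sketch of an append-only, hash-chained log in Python. This is my own toy construction, not any particular system: each record commits to the hash of the one before it, so quietly rewriting history becomes detectable.

```python
import hashlib
import json

def entry_hash(data, prev_hash):
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, data):
    """Append a record whose hash covers the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append({"data": data, "prev_hash": prev_hash,
                "hash": entry_hash(data, prev_hash)})

def verify(log):
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for record in log:
        if record["prev_hash"] != prev_hash or \
           record["hash"] != entry_hash(record["data"], prev_hash):
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"from": "alice", "to": "bob", "amount": 5})
append_entry(log, {"from": "bob", "to": "carol", "amount": 2})
assert verify(log)
log[0]["data"]["amount"] = 500   # quietly rewrite history...
assert not verify(log)           # ...and the chain notices
```

Tamper-evident logs and Merkle trees along these lines have been around for decades; what a public blockchain bolts on top is the (very expensive) consensus machinery.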
Interesting. Can you link to anything to read on this?
Maybe it’s “not about crypto” but if the biggest name in applied consequentialism right now is, essentially, in the business of marketing Ponzi schemes to the gullible as his day job, that seems kinda relevant to me when approaching the question of how much weight I should assign his opinions about existential risks?
It’s an interesting new development. Up until a year or two ago I would have thought of Bill Gates as the one famous donor who was most explicitly affiliated with Effective Altruism, though he tries to cover it up in order to seem more normal.
Is he explicitly affiliated? I always took his philanthropic actions as a rather banal "help the world's neediest" without all the philosophical forays described in this article.
Not quite explicitly. But even before the official Effective Altruism movement got started, the Gates Foundation got into the idea of evaluating their charity by concrete metrics. And Gates and Buffett created the Giving Pledge, inspired at least in part by Peter Singer.
Trying to objectively measure the performance of a charity is just a completely different thing from what the EA people do. The difference is in the EA philosophy about which outcomes you consider desirable and how you weight their desirability. The Gates approach is perfectly understandable to a normal person; the EA part is not. That being said, I have no idea how much Gates buys into the EA part or not.
I don't think that's true. "Objectively measure the performance of charity" is exactly what GiveWell does, which is probably the most widely known and influential EA organization in the world.
In fact, "objectively measure the performance of a charity" is a pretty good description of what EA is. It's just that words like "measure" and "performance" aren't so clear when you try and dig into the details. Performance relative to what outcome? What are we measuring exactly? And so we get a lot of internet arguments about what outcomes we should care about (including how much to care about people in the far future, etc).
I think my post was unclear. The EA movement's philosophy that we should not place any more weight on our own community than on strangers comes in addition to the attempt to objectively evaluate charitable impact. But I believe these are completely separable ideas, and the EA-specific one is completely foreign to most people, while the objective evaluation is perfectly normal to them.
But I see that as the sleight of hand that EAs use to nudge people toward their preferred causes. I may be determined to give to a non-EA-preferred cause, but I would also like to know which charities perform better at achieving that non-EA-preferred cause.
Most famous, definitely. But, while they're a lot less famous, Dustin Moskovitz and Cari Tuna were previously the main big pure-EA donors.
Elon is the most famous person to be literally associated with the religious EA people - he got his e-girl musician girlfriend by making a LessWrong joke at a party and founded OpenAI to literally do their silly ideas about evil computer defense. OpenAI of course abandoned that mission because it has nothing to do with any real life concerns.
...four months later, this comment aged pretty well...
The vast majority of capital in crypto is not from retail investors.
...so? If you think that institutional investors are immune from the delusions of crowds, I have a CDO whose AAA-rated (by Moody's!) top tranche I'd be happy to let you in on.
I think you are entirely overconfident if you think it is obvious that institutions are being gullible by investing in crypto. And even if you are right, "it's not fair, this guy is ripping off Wall Street and we shouldn't trust him" is a super weird take.
- I have no particular insight into how much institutional exposure to crypto there is, and I very much hope it's "less than I fear"
- but I fear it's a lot: nobody knew how overextended on forex trading Barings Bank was until Barings ceased to exist and that was _one rogue trader_. One of the major morals of 2007 was, for me, that no compliance department on god's earth _actually_ knows what their trading desks are getting up to, and the ratings agencies will rubberstamp known bilge without a second thought.
- "ripping off wall street" is all fun and games until it tanks the entire economy taking your job and your retirement savings with it, which already happened once in the last 20 years and I'm not looking forward to Round Two: Crypto Boogaloo
Unless you were dumb and pulled all your money out at the bottom, you didn't actually lose your money in the market at any point in the last 20 years.
Being able to ride out a market crash because you're in your 20-40s and have a stable job and can buy the dip is great. (Source: me, stayed employed in tech through the 2001 crash and the entire Great Recession, was definitely preferable to all the alternatives.) But allow me to introduce you to the extremely salient concept known as "time", which can crop up in two rather important ways:
- if you are approaching or at retirement age, having your pension/401k/IRA wiped out is, in fact, a pretty big deal: your heirs might get to ride the bounce back but there's every chance that you won't.
- rent (or your mortgage payment) is due on the first of the month and they don't care what happened to your retirement savings, nor do they care if you lost your job because of an economic crash. Similar dynamics apply to your grocery and heating bills.
I feel like "massive economic crashes are bad" should not be a proposition that needs re-litigating after 2008, but apparently here we are?
I have really only come into contact with EA through Slow Boring, but, to be honest, they seem like a bunch of super creepy weirdos.
The no-community-ties thing is one nudge away from discouraging the formation of friendships and families.
The "earn to give" thing has strong "prosperity gospel" vibes, where all sorts of shenanigans become self-justified.
And the AI thing is just sorta weird.
I’m not sure if the EA philosophy leads to this, but the larger rationalist community does act like they’ve accidentally logicked themselves into joining a cult. They like to live in group homes, are into polyamory because they couldn’t think of logical arguments against it, and sometimes join 100% literal EA cults like Leverage Research.
Beyond unhealthy absolutist philosophy, I think joining cults might just be what people do when they live in Berkeley. It’s in the water.
This is a depressing comment.
"I spend lots of time and money trying to actively improve the world and improve thinking about same".
"Sounds like outgroup low status stuff to me, CREEP."
The lack of community ties thing troubles me as well. Yes, EAs say that you can still have friends and family, but only because people would burn out if they didn't. Which strikes me as abhorrent. My relationships with others are deeply important and morally justified; they aren't simply a means to the end of giving me the emotional bandwidth to work longer hours so I can donate to EA charities. I think people are deeply entitled to pursue and value particular relationships and projects in their lives for their own sake. Not exclusively, of course; I do think everyone should try to devote some of their efforts to making the world better in a consequentialist way, but it's only one aspect of the good life.
While there are some lovely things at the abstract philosophical level with EA, what would make the world a far better place is if we could somehow convince everyone to increase their normal charitable giving by 10%.
You know, kind of a "slow boring" thing.
There may be some who say this. But I think there are others who say that friends and family are what make life worth living and worth saving, and that you are a person just as much as anyone else is, and that you deserve as much time with your friends and family as anyone else. But if you can help a hundred other people have more quality time with their friends and family, it may well be worth sacrificing some of your own for that.
I don't think any EAs think that you should give up on having friends and family. I think it's clear to everyone that that would be an unhealthy recipe for disaster.
EA reminds me a lot of those who believe that business should maximize its profits and no other objective is morally justified(1). Both are principles with a lot of heft behind them that are also overly simplistic and undermined by a lack of curiosity and humility.
Pursuing profits drives economic efficiency, which does a lot to improve social welfare, even if it also generates lots of negative externalities. But some take it to imply that promoting employee welfare at the expense of profits is unwise, without even asking whether “investing” in employee welfare can generate a long-term return. Similarly, pursuing EA would probably improve the world greatly. But it takes incredible hubris to believe that a calculation of the expected harm from AI, based on no data, is correct, and that this harm outweighs present issues like the impact of air pollution on health, or inequality.
What makes both camps seem like “weirdos” is how simplistic they are about the world. The human instinct that things like social ties matter should be respected, and those who don’t respect it should rightfully be looked at askance.
1. I’m not endorsing the “Friedman Doctrine,” which justifies profit maximization purely on the idea that it’s the only way to represent the interests of owners, but the more general idea outlined above.
It strikes me as a lot like the rationalist community. Many of them make good points, and society would benefit if most people moved on the margin toward their stated tenets. However, lots of in-group status-seeking, lack of nuance, homogeneity, and a complete lack of humility make me nervous about fully buying what they're selling. Personally, I'd prefer EA people to donate and encourage others to donate than run the government.
EA and the rationalist community are the same people, aren’t they?
(Meta-rationalism à la David Chapman is much healthier.)
There's a lot of overlap, especially with adherents, but I wouldn't say they're the same, since they're about different things. Personally, I'd describe myself as EA-adjacent, but the rationalist community creeps me out.
Yeah, I was specifically thinking about Julia Galef, who seems to be Matt’s introduction to EA but is (or used to be) one of the LW-style rationalists who think you shouldn’t use your brain normally and should instead explicitly calculate Bayes’ theorem on everything (there’s a toy example of what that looks like below), or possibly just pretend you’re doing that and say the word “prior” a lot.
https://metarationality.com/bayesianism-updating
He’s also talked about SSC, who is, eh, good in his professional area and isn’t too deep into the religious aspects of that community, but has a strange personal life-mission to get everyone to learn about his weird online friends who invented versions of conservatism nobody else believes in.
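(For the uninitiated, the "explicitly calculate Bayes on everything" style amounts to something like this, with entirely made-up numbers:)

```python
# One explicit Bayes update: P(H | E) = P(E | H) * P(H) / P(E)
prior           = 0.01   # P(H): assumed prior belief in hypothesis H
p_e_given_h     = 0.90   # P(E | H): chance of seeing evidence E if H is true
p_e_given_not_h = 0.10   # P(E | ~H): chance of seeing E if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))   # 0.083 -- strong evidence, still a small posterior
```

The arithmetic itself is uncontroversial; the critique linked above is about pretending you can put numbers like these on everyday beliefs.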
In fact, the big problem in the EA community now is that they've been colonized by rationalists.
I have noticed that. Was it always the case?
No, it started as philosophy grad students eating only rice and beans to give more money to Oxfam because they were persuaded by Peter Singer. I don't really know the intellectual history but I think the shared fondness for thought experiments about the far future plays a role.
I don't really disagree with any of that, and the first EA stuff I heard about was largely in line with that.
But increasingly, my impression is that the self-defined EA group are high on their own supply.
And as a guiding philosophy for representative democracy it makes no sense: "Vote for me as your Rep., and I will assign no priority to you or your community's interests whatsoever."
On the other hand "vote for me and I will focus on substantive, evidence-based policies to improve your life rather than empty ideological posturing" sounds like a good sales pitch for a politician.
It *sounds* like one, but I don’t think it always sells. Trump’s election pitch was much more “vote for me and I will ideologically posture on your behalf against the nerds that aim for evidence”. People care a lot about symbolism.
Somewhat, but I think a big part of Trump's appeal was that he was a "businessman" who knew how to get things done. Obviously in office he leaned much more heavily on the ideological posturing, but I think people generally underrate how much people voted for Trump in 2016 because they thought he would be effective.
Like Andrew, I don't disagree with this, but what bothers me is I think most EAs only advocate for the 10% rule because it's more likely to persuade people to get on board, not because they think only giving that much is sufficient. They would say that you should give up your relationships to focus more on earning to give if you thought it would be sustainable for you. I've explored the (online) EA community, and most of them *are* dyed-in-the-wool act utilitarians. So while I'll happily donate money to Give Well charities, I'm suspicious of the broader movement.
In my experience, most EAs are realistic that doing something like donating 50% of your income is not sustainable or psychologically doable for most people. But consider that there are large numbers of people working for normal non-EA charities who are effectively taking huge pay cuts because they want to do good. A lot of people genuinely want to devote their life to making the world a better place but aren't sure how. It's good that EAs are clear-eyed about it.
EA seems like a complete political non-starter. It combines the worst elements of neoliberalism (cold, reductive focus on efficient generation of dollars/hedons) and progressivism (esoteric and unpopular views held predominantly by people with a college degree).
You forgot the neo-feudalism and pseudo-religiousity.
We’re basically talking about the nobility giving out boons to deserving members of the peasantry in exchange for indulgences regarding how they came to be nobility.
You are implicitly viewing wealth accumulation as coming at the expense of someone else. Thankfully, it doesn’t. Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants.
So, something like earning to give is good, *even if you never give!* It is far better to do something productive, that people actually *want*, than to work a personally satisfying but low paying job.
My other comments at SB offer sufficient clarity on my thinking that I don't feel obliged to explain that I do, in fact, understand the concept of value generation.
"Unless you are taking from someone through force, the *only* way to get wealth is to provide goods or services that someone else wants."
But this sentence is sufficient to prove to me that you prefer to exist in some abstracted theoretical realm in which "lobbying" and "rent-seeking" are not concepts which run rampant through the American economy.
The EA people are a mix of professional class types looking for an endorphin hit (fine, better to get it helping poor people than snorting coke, but stop being a preachy fuck about it) and the rich seeking yet another reason to claim their iron hold on the American economy is in fact a good thing.
Also, in support of my initial response, you are explicitly on-record saying that policies which support a decent standard of living for first world citizens are bad because they take money away from wealthy EAers who will donate it to third world citizens.
I quote: "Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?"
Since you've already taken the extreme-to-the-point-of-parody view and put it on record, I think we're done here.
Not so. Lowering taxes so that more gets donated overseas is a very inefficient way of doing that. I am arguing that we should have *massively more* foreign aid spending (as well as completely open borders).
Nice try.
Here's the whole exchange surrounding that quote:
ME:
EA is noblesse oblige for modern wokeists.
“It’s my duty to earn as much as possible by any means because I have the vision and wisdom to disburse funds according to the interests of the greater good.”
It says nothing of a duty to create a society in which everyone is earning a decent living through the fruits of their own labor, instead of relying on the rich for oh-so enlightened, “targeted”, “optimized,” handouts.
That latter outcome is something only policy and politics can achieve.
FRIGID:
You put the finger on my general unease with EA, which I couldn't articulate. And this is why I prefer predistribution vs redistribution: change the rules of the economy such that work is rewarded, rather than hoping the rich toss a few pennies downwards.
ME:
Policies that do that are actively *bad* in the current EA framework because they redirect income from donation-minded rich westerners to middle-income and working class westerners who are, by global standards, already rich.
It’s a terrible, profoundly fucked ethical framework on a macro level.
YOU:
Of course those policies are terrible. Is it not so that spending in very poor countries has a greater positive impact per dollar? If we grant this as so, then the only justification for redistributing from the very well-off to the somewhat well-off is if we say that Americans are simply more important than foreigners. I find that a deplorable (if common) sentiment. Do you believe the lives of Americans are worth more than the lives of Africans?
Again, you're explicitly on-record opposing policies that will improve quality of life for American/European workers because it will redirect dollars from (in your mind) ultra-poor EA donor recipients to rich-by-global-standards working class Westerners.
But so would curtailing rent-seeking by capital in the US, so would regulating crypto effectively, so would introducing a payroll tax-funded universal healthcare scheme...
I’m trying to avoid replying to you, but this is just too obvious for even you to fail to understand:
The job of avoiding neofeudalism should not be left in the voluntary hands of the would-be neofeudalists, it should be forced upon them by the citizenry using the power of the state that they’re so desperately attempting to buy off at every turn.
EA isn't libertarianism, it doesn't imply that the state shouldn't also be redistributive. EAs generally support the standard laundry list of Democratic Party policies, but think e.g. foreign aid and asteroid prevention should be given more money.
Given that, it's only "voluntary" in the sense that all personal morality is "voluntary." Of course people can choose not to follow it, but then they're not EA in any sense.
And yet, those “neofeudalists” do exist, so wouldn’t it be better if they voluntarily contributed wealth to good causes?
EA is dead in the water because it's a thin veil for the bad guys, duh.
Virtually no one gives away enough to crimp their kids' ability to be at the top of the heap; EA is just another of many fig leaves and/or outright bribes designed to keep the rest of us from taking it all from them after they die.
Which is, to my mind, an inevitability. It's soon going to be near-universally understood that permitting the intergenerational accumulation of wealth above a level sufficient to, say, generate a few multiples of the median income in passive income (rough arithmetic below) is directly incompatible with democratic governance and the general well-being. At which point "inheritance taxes" are no longer going to be a question of revenue but of self-defense.
EA is yet another attempt to hold back the tide on th
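(Putting rough, entirely assumed numbers on "a few multiples of the median income in passive income," per the comment above:)

```python
# All inputs are assumptions for illustration only.
median_income = 70_000    # assumed US median household income, USD/year
passive_yield = 0.04      # assumed sustainable passive rate of return
multiples     = 3         # "a few multiples of the median income"

wealth_cap = multiples * median_income / passive_yield
print(f"${wealth_cap:,.0f}")   # $5,250,000
```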