In defense of interesting writing on controversial topics
Some thoughts on the New York Times' Slate Star Codex profile
Some time ago, Scott Alexander, the pseudonymous author of the Slate Star Codex blog, announced that he was abandoning his site. The reason was that a New York Times reporter had been in touch to explain that he was writing a profile of the blog, and that NYT policy would compel the piece to disclose Alexander's real name.
Alexander is a practicing psychiatrist and felt that, for reasons of professional ethics, this would jeopardize his job — so he shut the blog down in a rather dramatic fashion. This, in turn, led to a lot of condemnatory rhetoric from his fans and admirers, many of whom work in the technology industry and had a set of preexisting grievances with what they call "the media" and what I would call "the technology coverage of a half dozen outlets, notably including The New York Times." Much of the ensuing rhetoric from the pro-SSC camp (though not from Alexander, who, if anything, is even-tempered to a fault in his public persona) struck me as overheated, but I was sad the whole affair had driven Alexander from the internet.
The good news is that he has since resurfaced on Substack at Astral Codex Ten. He also has a new job as the founder of Lorien Psychiatry, an innovative effort to use telemedicine to make psychiatric treatment much more affordable.
Then, on Saturday, Cade Metz's NYT article about SSC finally dropped. And it's terrible. (Read Alexander's post about it, though I think he has too much of a conspiratorial view of this.[1])
I tend to think that too much time and mental energy is expended, including by me, on critiquing bad articles, and not enough time and energy is spent on praising good ones. So I feel kind of bad about writing a detailed criticism of a single bad article. But, given the larger context in which this story appeared, my sense is it’s going to become a flashpoint for a whole bunch of interesting struggles, so I think it’s useful and informative to say what I think.
A tortured premise
On its face, the idea of profiling an obscure blog written by a pseudonymous psychiatrist that has a surprisingly high-clout readership is perfectly good. Alexander’s readers include many Silicon Valley people, including — as Metz details — some very high-ranking executives. It’s an interesting story.
But I think Metz kind of misses what’s interesting about it from the get-go.
Ross Douthat reads SSC, and so does Ezra Klein.
David Brooks has quoted him in The New York Times.
Tyler Cowen praises SSC and the larger “rationalist community” that it was a flagship publication of, but also critiques them, saying “I would approve of them much more if they called themselves the irrationality community.”
As you’ll see in Metz’s story, the Vox writer Kelsey Piper is an SSC reader and a rationalist. But she works primarily for Vox’s Future Perfect vertical, which is a whole rationalist-inspired cornucopia of content.
In other words, this is an intellectual movement that’s somewhat influential in highbrow circles broadly, and that deserves to be situated as such. Well-known books like Toby Ord’s “The Precipice” and Philip Tetlock’s “Superforecasting: The Art and Science of Prediction” are important parts of the rationalist firmament. There’s also Julia Galef’s excellent podcast “Rationally Speaking” on which you can hear me yacking.
There’s a lot more going on than “some tech executives read this blog,” in other words.
But Metz does not seem interested in actually exploring rationalist ideas or understanding their content or the scope of their influence. Instead, the article is structured as a kind of syllogism:
Scott Alexander’s blog is popular with some influential Silicon Valley people.
Scott Alexander has done posts that espouse views on race or gender that progressives disapprove of.
Therefore, Silicon Valley is a hotbed of racism and sexism.
One time years ago, I went to Silicon Valley for a few days. As a white guy, I would not be well-situated to assess the extent to which it’s a hotbed of racism and sexism anyway, so I won’t comment on the conclusion. But the logic is specious, and the whole thing is an incredible missed opportunity to help people understand some valuable and interesting ideas.
Rationalism as I understand it
When I first heard about rationalists I was intensely confused, because in college I took a few different classes that involved reading "rationalist" philosophers from the Early Modern period, and contemporary rationalists' ideas are totally unlike early modern philosophical rationalists' ideas.[2] I should also note that contemporary rationalism is both a set of ideas and also a specific community in the Bay Area that, as I understand it, involves polyamory and communal living of some kind. I don't have any real knowledge or understanding of the latter and am just talking about ideas.
Rationalists’ big thing is that the natural human process of cognition is capable of reaching accurate results, but that’s not really the default mode. And rationalists are not just aware of this — they think it’s a big problem, and they try really hard to push back on it and develop better reasoning skills.
I think that’s probably why a rationalist blog is very popular in Silicon Valley. The nature of VC investing or the management of an early-stage startup is that there is incredible monetary value in making correct predictions in the face of imperfect information. Then on top of that, the kinds of recommendations that rationalists give for how to reason better tend to align with engineers’ natural instincts and inclinations: be more bloodless and objective, evaluate claims on the merits in isolation, gather and surface all available facts.
This thing I did, where I tried to list specific predictions with specific probabilities so I could go back and check how wrong I was, is a rationalist idea.
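For the curious, here is a minimal sketch of what that kind of self-grading can look like, in Python. The predictions below are invented for illustration, and the Brier score is just one common way of keeping score; nothing here is specific to how any particular forecaster does it.

```python
# Grade a set of probabilistic predictions after the fact.
# Each entry is (stated probability, whether the event actually happened).
# The Brier score averages the squared error; lower is better, and
# always guessing 50% scores 0.25.
predictions = [
    (0.9, True),   # "90% chance this happens" -- and it did
    (0.7, True),
    (0.7, False),
    (0.3, False),
    (0.1, False),
]

brier = sum((p - int(happened)) ** 2 for p, happened in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.138 for these made-up predictions
```

A well-calibrated forecaster's "70% confident" claims should come true about 70% of the time; checking that is the whole point of writing the probabilities down in advance.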
But of course, there’s more to it than predicting. The key to Metz’s point is that part of the practice of rationalism is that in order to do it effectively, you have to be willing to be impolite. Not necessarily 24 hours a day or anything, but when you’re in Rationalism Mode you can’t also “read the room.” A rationalist would say that human psychology is over-optimized for reading the room, and that to get at the truth you need to be willing to deliberately turn off the room-reading portion of your brain and just throw your idea out.
Relative to the mass public, the biggest difference between rationalists and everyday Americans is almost certainly that Americans are very religious.
But a quirk of American life is that even though most Americans say they believe in God as described in the Bible, nobody thinks it’s interesting to argue about this. By contrast, lots of people like to argue about race and gender issues. But in progressive circles, it is common to observe the norm that because the struggle against racism and misogyny is important, it is impolite to dissent from an anti-racist claim or argument unless you have some overwhelmingly important reason for doing so.
Rationality vs. manners
This exchange between Conor Friedersdorf and Chris Hayes on the merits of arguing about the San Francisco School Board illustrates the progressive norm, upheld here by Hayes.
To an extent, I disagree with Hayes about this specific case,[3] but I accept the basic force of his logic. Hayes, as a prime time cable television host, has a kind of power in our society that it's incumbent on him to wield wisely. And while I think wielding that power wisely entails not lying to people, it is compatible with discretion about what truths one chooses to speak. But even though rationalists will understand the strategic logic of this kind of argument as well as (if not better than) most, the practice of rationalism requires setting it aside.
You see this on display in a post Metz criticizes titled “Gender Imbalances Are Mostly Not Due To Offensive Attitudes.”
In the (liberal, coastal, urban, very political) circles that I travel, everyone (especially parents) knows and acknowledges that men and women are, on average, different in ways that end up mattering for the distribution of outcomes. But everyone also believes that sexism and misogyny are significant problems in the world, and that the people struggling against those problems are worthy of admiration and praise. So to leap into a conversation about sexism and misogyny yelling “WELL ACTUALLY GIOLLA AND KAJONIUS FIND THAT SEX DIFFERENCES IN PERSONALITY ARE LARGER IN COUNTRIES WITH MORE GENDER EQUALITY” would be considered a rude and undermining thing to do. This is just to say that most people are not rationalists — they believe that statements can be evaluated on grounds beyond truth and falsity. There is suspicion of the guy who is “just asking questions.”
Annie Lowrey even published a piece in the Atlantic denouncing the “Facts Man” on precisely these terms:
Sometimes, Facts Man is less about truth than raising questions. Why can’t Facts Man talk about certain issues in exactly the way he wants to? Why can’t Facts Man bring up scientific facts relevant to other people’s humanity without getting called out for it? Why can’t Facts Man make obscenely offensive conjectures about life-or-death issues? Where’s the open debate? Why does Facts Man have to genuflect to other peoples’ identity politics? Facts Man himself has no identity politics! He is an individual, as unique as a snowflake, but certainly not as fragile as one.
Personally, I find myself somewhere between Lowrey and Alexander on this. The pure vision of the rationalists and the belief that statements could or should be read devoid of context or purely literally strikes me as untenable. But I think that in the Trump era, journalism as a whole has tilted too far in Lowrey’s direction, with too much room-reading and groupthink and not enough appreciation of the value of annoying people with inconvenient observations.
The radicalism of effective altruism
Metz tosses off the idea that “many Rationalists embraced ‘effective altruism,’ an effort to remake charity by calculating how many people would benefit from a given donation.”
I think if you want to understand rationalism and its nexus with Silicon Valley as a movement relevant to politics, you have to understand effective altruist thinking. And you have to understand it correctly, because it’s honestly much more controversial than anything Metz critiques in his piece. What EAs think is that people should make decisions guided by a rigorous empirical evaluation based on consequentialist criteria.
GiveWell, the effective altruist organization that I’m most aware of, currently recommends nine charities. One gives anti-malaria medication, one gives insecticide-treated bed nets to prevent malaria, one gives vitamin A supplements to African kids, one gives cash incentives for routine childhood vaccinations in Nigeria, several do de-worming programs in different African countries, and one does direct cash transfers to low-income households in Kenya and Uganda.
In other words, effective altruists don’t think you should make charitable contributions to your church (again, relative to the mass public, this is the most controversial part!) or to support the arts or solve problems in your community. They think most of the stuff that people donate to (which, again, is largely religiously motivated) is frivolous. But beyond that, they would dismiss the bulk of the kind of problems that concern most people as literal “first world problems” that blatantly fail the cost-benefit test compared to vitamin A supplementation in Africa.
Effective altruists also believe that you should give much more to charity. Alexander is an advocate of the Giving What We Can pledge that urges secular people in the developed world to give at least 10% of their income to charities that address problems afflicting the global poor. In other words, it’s tithing, but it’s for secular people and aimed at lifting up the poorest people on the planet rather than local community institutions. If you want to think of this as a political program, it’s a radical political program, and if you’re not centering it in your understanding of rationalist politics then you are very much missing the forest for the trees.
Neither left nor right
Metz is very interested in painting Alexander as racist, writing for example that “in one post, he aligned himself with Charles Murray, who proposed a link between race and I.Q. in ‘The Bell Curve.’”
It is true that he did that, but if you read the post, he was aligning with Murray’s suggestion that we should have a universal basic income to reduce poverty, not with Murray’s ideas about race and I.Q. That’s particularly odd, because it’s not difficult to find Alexander expressing genuinely controversial views about race over the years. Earlier, for example, I alluded to a post in which Alexander urged people to commit to giving 10% of their income to charities that support poor people in poor countries.
Sounds like a nice guy, right?
But here’s an excerpt from that same post, titled “Nobody is Perfect, Everything is Commensurable”:
Five million people participated in the #BlackLivesMatter Twitter campaign. Suppose that solely as a result of this campaign, no currently-serving police officer ever harms an unarmed black person ever again. That’s 100 lives saved per year times let’s say twenty years left in the average officer’s career, for a total of 2000 lives saved, or 1/2500th of a life saved per campaign participant. By coincidence, 1/2500th of a life saved happens to be what you get when you donate $1 to the Against Malaria Foundation. The round-trip bus fare people used to make it to their #BlackLivesMatter protests could have saved ten times as many black lives as the protests themselves, even given completely ridiculous overestimates of the protests’ efficacy.
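Spelling out the arithmetic in that excerpt, using only the numbers Alexander stipulates (a quick sanity check of the quote, not an endorsement of any of those inputs):

```python
# Back-of-the-envelope check of the excerpt's stipulated numbers.
participants = 5_000_000      # #BlackLivesMatter campaign participants
lives_per_year = 100          # stipulated best-case effect of the campaign
career_years = 20             # assumed remaining years per officer's career

total_lives = lives_per_year * career_years                       # 2,000 lives
print(f"lives per participant: 1/{participants // total_lives}")  # 1/2500

# The cited Against Malaria Foundation figure -- $1 buys 1/2,500 of a
# life -- works out to roughly $2,500 per life saved, which is why a few
# dollars of bus fare dominates the per-participant effect above.
```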
This is an extremely hot take! And while I would not say this paragraph is typical of SSC content, it does a good job of expressing the SSC view toward most of what passes for politics in the United States of America — that it doesn’t matter at all.
The country has been torn apart over the past five to six years by a running argument between people on the left, who believe that remedying systemic racism as manifested in the law enforcement system is an incredibly important issue, and people on the right, who believe that the left’s failure to support our law enforcement heroes is a crisis that threatens to unleash anarchy across the country. Alexander’s view is that this is all incredibly unimportant, and you should give your money to cost-effective public health charities in Africa — not just as a superior way of demonstrating that Black lives matter, but as a form of moral engagement with the world that is all-around superior to caring about politics.
The people involved in this community are mostly secular, highly educated, and non-patriotic, so I bet they’re overwhelmingly Democrats if they vote. As the primary axis of cultural conflict has shifted from sex-and-religion to race stuff, some of them are probably becoming a bit dislodged. But if we pivoted back to a sex-and-religion-focused politics (say, if the Supreme Court overturns Roe v. Wade) or flipped back to a focus on immigration, they’ll probably fit in better with Democrats again.
A valuable contribution
I would not endorse the SSC worldview.
I am obviously not nearly that disengaged from politics. More broadly, having done my time in the philosophy major salt mines, I find the level of abstraction involved in rationalist discourse a little untenable. You wind up in a situation in which, because there are so many chickens on the planet and chickens are typically raised in deplorable conditions, minor improvements to the living standards of chickens weigh very heavily in the universal moral calculus. And exactly how overwhelmingly significant a small improvement in chicken welfare is ends up hinging on the precise math of how you do the chicken-vs.-human utility calculus.
Similarly, rationalists believe that existential risk — the kind of risk that would lead to human extinction — is overwhelmingly important, so incredible stakes can rest on the question of whether there’s a 0.1% chance or 0.001% chance of something ending in human extinction.
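A toy calculation makes that sensitivity concrete. The probabilities are the ones from the sentence above; the population figure is just a rough current headcount, and a rationalist would note that counting future generations inflates the stakes much further:

```python
# How much the expected toll of an extinction-level event moves when the
# probability estimate shifts by two orders of magnitude.
world_population = 8_000_000_000  # rough current figure; ignores future generations

for p in (0.001, 0.00001):        # 0.1% vs. 0.001%
    print(f"p = {p:.3%}: expected deaths = {p * world_population:,.0f}")
# p = 0.100%: expected deaths = 8,000,000
# p = 0.001%: expected deaths = 80,000
```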
I'm happy someone is pursuing these questions, but I don't find the contemplation of these extreme scenarios to be particularly enlightening.
But if you want to know about rationalism and effective altruism's influence on Silicon Valley, I think it's useful to look at MacKenzie Scott, who is giving away her share of the Bezos family fortune in a very particular way:
After my post in July, I asked a team of advisors to help me accelerate my 2020 giving through immediate support to people suffering the economic effects of the crisis. They took a data-driven approach to identifying organizations with strong leadership teams and results, with special attention to those operating in communities facing high projected food insecurity, high measures of racial inequity, high local poverty rates, and low access to philanthropic capital.
The result over the last four months has been $4,158,500,000 in gifts to 384 organizations across all 50 states, Puerto Rico, and Washington, D.C. Some are filling basic needs: food banks, emergency relief funds, and support services for those most vulnerable. Others are addressing long-term systemic inequities that have been deepened by the crisis: debt relief, employment training, credit and financial services for under-resourced communities, education for historically marginalized and underserved people, civil rights advocacy groups, and legal defense funds that take on institutional discrimination.
This is what I think most Americans wish billionaires would do with their money. Scott, working with a good team, is identifying well-regarded organizations that are addressing highly salient problems in our communities.
A rationalist would say that’s exactly the problem. This is a philanthropic process that is implicitly designed to maximize praise and admiration for the donor rather than to maximize positive influence on the world.
A very different approach is what Cari Tuna and Dustin Moskovitz are doing with their smaller fortune and the Open Philanthropy Project. This is decidedly less apolitical than Alexander’s views, but clearly reflects rationalist thinking. They say that when selecting causes, they look at these three factors:
Importance: How many individuals does this issue affect, and how deeply?
Neglectedness: All else equal, we prefer causes that receive less attention from other actors, particularly other major philanthropists.
Tractability: We look for clear ways in which a funder could contribute to progress.
Neglectedness in particular is a critical point of contrast. If you want to be praised by other members of the community, you should donate to causes that lots of people are already talking about. The fact that other rich people may already be donating to them only makes it better — some of the people who will be praising you will be rich and powerful. The neglectedness view is that an additional $1 million is unlikely to make a difference in a heavily funded area like climate advocacy, but could make a huge difference in some other space that isn’t already full of money.
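To make that intuition concrete, here is a crude scoring sketch. The formula and every number in it are mine, purely for illustration; Open Philanthropy's actual process is far more qualitative than this:

```python
# Hypothetical sketch: a cause's marginal appeal rises with importance and
# tractability and falls as existing funding crowds the field.
causes = {
    #                        importance, tractability, existing funding ($M)
    "heavily funded cause": (9,          6,            5_000),
    "neglected cause":      (7,          6,            50),
}

for name, (importance, tractability, funding_m) in causes.items():
    # Square-root damping is a stand-in for diminishing returns to money.
    marginal_value = importance * tractability / (1 + funding_m) ** 0.5
    print(f"{name}: {marginal_value:.2f}")
# heavily funded cause: 0.76
# neglected cause: 5.88
```

Even with a higher importance score, the crowded cause loses badly on the margin, which is the neglectedness point in miniature.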
I’m a little biased here. The Open Philanthropy Project funds five areas of U.S. policy advocacy — criminal justice reform, farm animal welfare, macroeconomic stabilization policy, immigration policy, and land use reform — and obviously, I am very enthusiastic about several of those areas. But to me, this point about neglectedness is really insightful and important. Metz names Stripe CEO Patrick Collison and Paul Graham as SSC readers, and I don’t think it’s a coincidence that they are among those funding the Fast Grants initiative, which worked on the principle that “science funding mechanisms are too slow in normal times and may be much too slow during the COVID-19 pandemic.”
The basic idea was to give out modest-sized grants to qualified researchers working on COVID-19-related matters and to send the money out really fast with minimal hassle and paperwork. It’s a cool and innovative idea that, like Open Philanthropy, involves trying to think more critically about the way grant-making happens and doing things other than rushing into the exact same fray as everyone else.
It’s good to read things
Long story short, I am neither a fully on-board rationalist nor a Slate Star Codex diehard, but I liked the blog and I enjoy its successor blog, too. I highly recommend it to you.
And critically, by “highly recommend it to you” I do not mean “I agree with all the takes.” I think contemporary society is willing itself into a state of incredible stupidity by wanting to evaluate the worthwhileness of reading something purely on the basis of whether or not it’s correct. When I was in high school, I used to like to peruse issues of National Review, The Weekly Standard, The New Republic, The Nation, and Mother Jones at the library. I would learn new things in every issue. And it was a good habit to get a wide range of takes on politics.
Now in the internet era, we have too much content to be completists like that.
But even more so, social media incentivizes the wrong kind of reading. Today you read someone from a rival school of thought in order to find the paragraph or sentence that, when pulled out of context and paired with a witty Twitter quip, will garner you lots of little hearts. I’m as guilty of doing this as anyone. A lot of very smart people have poured a lot of time and energy into making you want to collect those little hearts.
That said, the way you learn things and get smarter is to read strong writers and try to understand what they're saying and why people believe it — not to pick their work apart for clout or hunt for ways to caricature and snark about it. A really good recent-ish example of this was Corey Robin's "The Enigma of Clarence Thomas," which really engages with and explains the content of Thomas' judicial and political thought, not to "debunk" it but to elucidate how a certain style of racial pessimism can be leveraged to support very right-wing views.
The other day I wrote something critical about Ibram Kendi's take on the achievement gap, but I sandwiched it with overall praise for his book. Some people in the comments took me to be deflecting or protesting too much, but I really do think everyone should read it. It's important to read strong writers with big, influential ideas and understand what they're saying. I learned an incredible amount from having Robert Nozick as a professor because he was a brilliant man with unusual ideas, and the value of the experience is not summed up by the fact that I still think libertarianism is kind of ridiculous.
I have never in my life identified as a “free speech absolutist” and I hope I never will. But something about the internet is making people into infantile conformists with no taste or appreciation for the life of the mind, and frankly, I’m sick of it.
[1] In general, I get the sense that the rationalist community has some ironic lapses of rationality when it comes to navel-gazing, notably including a preoccupation with the views of people who are socially proximate to rationalists rather than with views that are objectively influential.

[2] It's actually almost the opposite — today's rationalists are empiricists!

[3] Progressive journalists have more practical influence over political outcomes in blue cities than we do over national politics, so it makes sense to focus our attention not just on things that are important but on things where what we say will plausibly matter.
About 12 years ago, I was Facebook friends with a handful of garrulous libertarian types, and I had a great time debating with them, as I was (and still am, really) a liberal institutionalist. Debating with them forced me to take my beliefs out of my gut and build more rigorous rational/ethical/philosophical underpinnings, and I learned quite a bit. These guys were brilliant debaters and far better at logical reasoning and constructing rational arguments than I was.
In 2011, they invited me to join a Facebook group called "The Right Stuff". It was filled with even more garrulous libertarian types. At first, a few were making what I thought were ironic philosophical arguments for pretty abhorrent stuff, and I was entertained because I thought it was good to attempt a defense of enlightenment liberal values against fascism/monarchism/anti-semitism/explicit racism. But two things happened.
One: I found it impossible to articulate a purely rational basis for human rights. I realized that "rationalism" is actually downstream from moral values, which are inherently irrational (in an existentialist sense, in that we decide what is good and what is bad and start reasoning from there). As such, I would routinely get destroyed in these debates because I would mistakenly assume that the folks advancing fascist/dark enlightenment ideas shared my first principles. I was unable to articulate a more "rational" defense of enlightenment values, and I admit I was getting slapped around by rhetorically superior fascists. This leads to the second thing.
Second: I realized pretty fast that for a nontrivial number of The Right Stuff members, there was no irony at all. They were really fascists, and when they talked about racial superiority and genocide, they weren't doing it to sharpen small "l" liberal thinking; they were actually advocating for fascism/racial hierarchies/genocide. By starting from the premise of "rationality" and then executing feats of rhetorical sleight of hand to hide the ball on what their actual first principles were, they were making extremely effective and appealing arguments for fascist thought and the dark enlightenment. It was chilling.
This was 2011; Obama hadn't even wrapped up his first term. I took my lesson, left the group, and blocked most of those garrulous libertarian types, who by the time I left were less and less ironically advancing fascist thinking.
Then 2016 happened, and I realized I had witnessed one of the seed pods that would erupt into the alt-right. The corrosion was much worse than I thought, and I realized that those garrulous libertarian proto-fascists had spent the intervening five years planting anti-enlightenment thought in the fertile ground of right-wing media and various other frustrated and "neglected" subgroups (think Gamergate).
What we face now is significant portions of the demos no longer buying into enlightenment values. Civil society had fallen asleep on its watch, and its ability to articulate why things like human rights matter, why democracy matters, why institutions matter had almost completely atrophied. In my view, that's why so much of the early Resistance was less cogent argument and more folks who in their guts still believed in enlightenment society being unable to articulate why, and coming up with catchy slogans as substitutes for a robust defense of small "l" liberal society.
All of which is to say, one of the issues I have with strongly advocating for rationalist debate (which, I agree, can be very good!) is that "rationalist" debate is one of the most effective mind-viruses the proto-fascists I discussed earlier use to inject their corrosive beliefs into the minds of unsuspecting targets. "Rationalist" positions like "what, why can't I just talk about The Bell Curve" and "despite making up only 14% of the population..." are, much more often than not, just starting points for very effective rhetoric that convinces people that our (still, barely) enlightenment society is bad and should be scrapped. I see that corrosion as an existential threat to liberal democracy as we know it.
The solution I currently apply has two parts. First, I only engage in "rationalist" debates with those I am reasonably sure are arguing from a place of good faith. Good faith, to me, means a willingness to be rational and to change one's mind in the face of superior evidence, but also not actually questioning the inherent moral equality of all people and the accompanying beliefs that make liberal, enlightened thought possible.
Second, I really just bought into existentialism, and I now just accept certain values as articles of faith. I don't debate certain first principles. I hold some truths to be self-evident and thus in no need of strictly rational defense.
All of which is to say: in our current struggle with a much more corroded base of support for small "l" liberal, enlightened society, in which we are actively engaged with a proto-fascist mind-virus that has infected many people, I don't see the value in wide-open, unmoderated platforms. I am also very, very suspicious of people who are attracted to "edgy" topics, because I don't know if they are arguing from a position of good faith or if they're fascists looking to infect more minds.
Anyways, I think there are some very good sources of good-faith rationalist discussion of "edgy" topics. Pretty much anything Tyler Cowen puts out is excellent in this regard, and on the whole, I think Slate Star Codex was more good faith than not.
Great post! Really got me thinking.
So my perspective on this is interesting because I run a moderately popular philosophy discussion group / drinking club thing (or at least I did pre-pandemic) here in Seattle that attracts a lot of SSC-style rationalists, and my experience has been pretty mixed.
Most notably, on at least three occasions that I can recall we've had to expel rationalists who turned out to be deeply into really evil dark enlightenment shit and who were using the group to try to recruit, as well as harassing people and just generally being assholes.
That puts rationalists in the running for the title of most-frequently-expelled ideological subculture, alongside Jordan Peterson fans, Hindutva cranks, and various flavors of anarcho-capitalists and conspiracy theorists. And we take a pretty relaxed approach to moderation, since heated disagreement is part of the point of the group, so to get kicked out you really need to be doing stuff like blowing up at people, making threats, engaging in sexual harassment, or advocating for truly heinous points of view.
Now don't get me wrong, we get some rationalists who are great too. In fact, in one of the cases where we had to kick someone out, it was actually another rationalist who clued me in that the guy was bad news, because he recognized some of the subcultural dark-enlightenment jargon he was using.
But the modal rationalist who shows up to our group is just a slightly odd young man who works in tech and believes strange, very religious-sounding things about artificial intelligence. This being Seattle, we've got a number of regulars who actually do AI stuff for a living, and it can sometimes feel like they're stuck running some sort of weird de-radicalization center for people who have spent too much time on the LessWrong forums. I'm only half joking! I have met 19-year-olds who appeared to be experiencing genuine anxiety over Roko's Basilisk.
So, I dunno. I've read a fair bit of SSC and found it sometimes interesting and sometimes kind of daft. I have mixed feelings about the larger subculture. I like the effective altruism and Bayesian rationality parts of it, but I think it has some really weird corners that don't seem terribly healthy, along with a small but genuinely dark underbelly where it intersects with the scary neo-reactionary stuff.