“you can only explain the influence… with regard to the fact that the technophile wing of the Trump administration believes the singularity is imminent.”
So they’ve invented their own fantasy theological Rapture, but without the tedious parts of Christianity like charity, empathy, and a belief in the sanctity of all human beings.
Apparently Christianity is making a big comeback in Silicon Valley. Could be a combo of a rightward drift and an almost religious reverence for the “coming singularity.”
My observation is the opposite - the Zvis and Hintons of the world are observing the exponential rate of progress in AI in a very scientific and data-driven way, while AI pessimists like Marcus have an almost religious view of the human brain as divinely touched, and thus continually make unfalsifiable claims that LLMs can't possibly be capable of logic and reasoning, despite overwhelming evidence to the contrary.
I don't think it's defensible to be SURE either that current LLMs are a step on the road to AGI *OR* that human cognition is special in some way and doesn't work along similar processes at an exponentially more advanced level. Believing either with certainty can easily be classified as "religious belief;" both rely on faith to draw a conclusion beyond where the evidence can take us.
It's a hard thing for people to imagine in the age of the internet and Trumpian politics, but the best thing to do is to keep an open mind...
My theory: too much drama in the polycules. It's like if you dated a crazy person in your youth - fun for a while, but then it gets old and you just want to get married and be a normie.
We really need a word other than Christianity for "the kind of pseudo-Christianity popular among the MAGA that has piss-all to do with the actual teachings of Jesus."
Xianity?
Then I could ask questions like, "When you say Christianity is surging in Silicon Valley, do you mean Christianity or Xianity?"
I think this is just impossible without some real weird synthesis. The tech people I've met are so Nietzschean that their entire ethos cannot be a part of the Christian tradition. It feels absurd to say this, but I sincerely believe that Constantine I or Charles V had more Christian compunction and restraint than any of these tech people. They're just creatures of will.
They think they're making God on their computers!
And maybe they are but that's just simply not Christian.
I'll just say, as a sort of "warning" from someone a bit older: these "trend" pieces about how a particular segment of society, or America as a whole, is turning in some new direction are a very old genre of newspaper writing... and often turn out to be bunk.
Basically, a reporter wants to capture the "zeitgeist," so they write a piece interviewing a few leaders within a movement and then use it as a jumping-off point to claim this is the big new trend in an area or in the country.
A good example: back in 2010 there were multiple trend pieces about how libertarianism was the new rising force in the GOP and America because of the Tea Party. The New York Times Magazine had a big profile interviewing Ramesh Ponnuru and others about how there was this big libertarian moment happening (the thing I actually remember most is that the reporter made sure to mention they dined on filet mignon for their conversation... as though he needed to flex about his expense account or something). Turns out Thomas Massie was basically correct https://www.washingtonexaminer.com/opinion/1881868/rep-massies-theory-voters-who-voted-for-libertarians-and-then-trump-were-always-just-seeking-the-craziest-son-of-a-bitch-in-the-race/
So yeah, be very, very wary of articles that take interviews with a handful of supposed "thought leaders" and claim a new trend is happening. I will bet money that when the Pew numbers come out, they will show that while Harris probably ran behind Biden in Silicon Valley this time around, she still overwhelmingly got most of the vote, and that in 2028 the Dem nominee will get a vote share close to Biden's.
I lived right by Dimes Square for a year! All the pieces written about it massively overstate the weird underground groups that run through it. Most people are just there for the $17 glass of natural wine.
Funny that. Don't live near Dimes Square, but definitely have hung out at bars and restaurants in the area many times. So like you, I had personal experience reasons to know this article was nonsense.
Like this Silicon Valley stuff. The way more likely "explanation" for very rich, super-high-profile people attaching themselves to Trump is that throughout history the super rich have tried to cozy up to power for their own ends, and that with Silicon Valley being a more mature industry now than it was even 10 years ago, its super rich are just now acting like the super rich always have.
I wouldn’t call these stories “bunk”. I would say that they report on real microtrends, but often give the impression that they are describing a whole culture, rather than one interesting and weird subculture. It’s like writing about 15th century Florence and saying everyone was a rich banker or an artist, forgetting that well over 90% of the population was neither of these things.
This discussion sort of reminds me that this is all another example of the biases that come from media being overly concentrated in NYC and tech being overly concentrated in the SF metro area. Both areas are famous for being open to different cultures, ideas, subgroups, etc.* So there are all sorts of microtrends and subgroups in both. And all it takes is the right reporter living in the right neighborhood (or hanging out with a particular group of people) to over-extrapolate from a very real trend attracting increasing numbers of people and claim it is a much, much bigger trend than it really is.**
* I can't be the first person to note that among the many reasons Silicon Valley became "Silicon Valley," the fact that San Francisco has been a haven for alternative culture since the '50s/'60s has got to be one of them; a disproportionate number of people clustered at the high end of the "openness to something new" scale has to be part of the story here.
** I always think that with these trend pieces, if you can insert the quote "Anyone who's anyone is doing XX," then you should immediately treat them with a skeptical eye.
"I'll say just as sort of 'warning' as someone a bit older; these 'trend' pieces about how a particular segment of society or how America is turning in this new direction are very old genre of newspaper writing...and often turn out to be bunk."
And of course, this is extremely unsurprising, because not only is this kind of trend analysis rarely the least bit empirical, it takes wild swings based on tiny changes in data.
There is not really much of a meaningful difference between the temperaments of a country where Kamala Harris wins 49-47 and one where she loses 49-47, but we've all decided we have to craft sweeping narratives about how now Americans love Trumpism and we're in a new right wing golden age and blah blah blah. Just like 17 years ago we all said everything was progressive now, and the fundies had been defeated, and blah blah blah, because the Republicans had one bad election cycle.
When you drop the phenomenon from overall politics to even more nebulous questions like "how many computer coders are down with the Jesus" you are sure to get some scattershot takes.
Could not agree more. The amount of over-extrapolating from one election result is nuts, especially since, as you say, the shift in votes is quite small.
I do think there is value when the shift in votes is a) quite large and b) seen over multiple election cycles. For that reason, I do buy that there is a trend with voters of Hispanic background: the shift in 2024 was quite massive and has been seen over multiple cycles.*
But yeah, honest to god, given how badly incumbent parties did throughout the developed world, it's really hard for me to escape the "it's the inflation, stupid" angle on the election results. If anything, Trump's repulsiveness is probably why the margin wasn't much higher in his favor, judging by how elections went in other countries.
* One reason I liked Kevin Drum (RIP) was his commitment to backing his posts with data - or, more accurately, to looking at conventional wisdom and seeing how often the data didn't actually back it up. He had a very good post showing that for all the pontificating about whether Silicon Valley shifted right, or black voters, or young voters, the story really is about Hispanic voters. https://jabberwocking.com/kamala-harris-bombed-with-hispanic-voters-thats-the-whole-story/
I feel like it's happened multiple times where I've read a trend piece in the NYT, and later found out (through googling or scuttlebutt or whatever) that all the people profiled in the piece were in the same social circle as the author (went to school together, or friends of friends, or what have you). What a way to make a living: hear an interesting tidbit about some friends and then turn it into national news.
i think this is putting a slightly conspiratorial spin on the common phenomenon of writing about what you know. when it is somebody like noah smith doing it, i know how many grains of salt to take that kind of article with. when it is an unfamiliar byline in the times, maybe less so.
"Christianity is making a comeback in SV" - any citations here? Because I haven't seen it so far (looking from the inside of a large SV firm). What I mostly see is people scrambling to keep their jobs.
The way some (definitely not all) people in AI approach the topic really does remind me of religious fanatics awaiting the rapture.
Like a group of evangelicals who denounce "the sin of empathy," many of the "longtermists" in particular seem to have stripped effective altruism of all its altruism towards their fellow man and replaced it with vague and untestable notions about AI.
I don't know where this disdain for longtermists comes from. While I wouldn't identify as one, I am sympathetic to the idea that, *in addition* to caring about the current population, we should care about people in the far future. Elon Musk is most definitely not a longtermist, as he is destroying important institutions - something longtermists think about and value a lot.
"I don't know where this distain for longtermists comes from."
If this is just a way of saying, "I wish people didn't immediately associate longtermism with Sam Bankman-Fried," then I sympathize -- I wish that too.
But if you are really saying, "I don't have any hypotheses about why people might think badly of longtermism," then I'd suggest you google "Sam Bankman-Fried".
It's unfair, I agree, but it's not the least bit mysterious.
Setting aside SBF, the longtermist project is inherently unfalsifiable in general, and longtermists tend to be overconfident in spite of that fact. If we do realize the default outcome where AI is limited (by energy?) and the population decline continues, it will seem obvious that a bunch of ethicists got nerdsniped by abstract moral questions into ignoring anything with actual moral relevance.
Well, all moral theories are unfalsifiable, by the is-ought distinction. Even if we assume you are right about AI, I would still think it a good idea for philosophers to think about how to build more resilient institutions and more stable geopolitics. Lastly, longtermists (in the EA tradition) donate 10% of their income at very high rates to charities that improve the lives of people today - that is to say, they very clearly care about things of moral relevance today.
Your second point is well taken. But if you’re going to base your morality on claims about the nature of the far future then it does, in fact, matter how accurate your predictions are in a way that isn’t relevant for moral systems that revolve around sun worship.
The only claim about the far future that longtermists base anything on is the claim that there could well be far more people in the future than in the present. They definitely don’t base things on the claim that there *will* be far more people, because one of their major causes is preventing human extinction.
I'm not talking about disdain for the far future. I think it is worth considering, including AI risk, even if many of the claims are couched in untestable and mystical terms. I'm talking about people like Elon, who claims he's on a mission to help mankind's long-term survival and that that mission is served by taking aid from needy people today.
Maybe he started off justifying it to himself as a necessary tradeoff between general welfare today and long term survival. But if you read his correspondence with government agencies over the last two months it is ABUNDANTLY clear he is enjoying the cruelty of it, and thrilled to be taking out hordes of "NPC"s.
Yes, I agree that Elon would rather let the world perish than not be crowned the hero, even if he did not help, or even if he is the one destroying the world.
As the old Greek proverb goes, "a society grows great when old men plant trees whose shade they know they shall never sit in, but first they have to burn down every single tree currently in existence to make things nice and clean for the new trees."
I wonder if it's partly a media effect, where the AI risk stuff just attracts more eyeballs and so gets more play, while all the EA effort aimed at current issues is considered too boring for a mainstream readership. It also just seems like a cooler thing to be involved in than trying to make incremental progress on animal welfare.
Well that, and the convergence several years ago between the initial England and Oxford-based EA cohort with the zesty San Francisco tech set, who filled their impressionable English minds with blabber about transhumanism through AGI and colonizing distant galaxies. Probably also why the only rational, simple, and straightforward approach to AI risk - a research moratorium - wasn't even seriously considered.
I think the issue is that while lots of people talk about the importance of the wellbeing of future generations, most of them only do so in the context of environmentalists yelling at people for consuming too much. Yelling at people for their consumption habits is a standard part of politics that people understand. What longtermists do is go beyond yelling and instead actually think about how to benefit future people. That feels weird to many people. It's not as fun as yelling at people who use too much plastic.
I mean, the case that it isn't in fact a fantasy is actually rather strong. AGI / ASI is a fundamentally different and frankly more dangerous technology than anything that has come before, and the consensus opinion among the most informed people in the field is that we're 2-5 years (probably on the lower end) from AGI. The alignment problem is not solved (if it is even soluble), and that essentially means human extinction by default: the creation of a species smarter and more capable than humans, whose goals benefit from the consumption of resources that humans also benefit from (and that capitalist incentives dictate be given as much power and control as possible - because why would you ever want a less capable human who needs to sleep and sometimes gets sick in charge of things instead of a more capable AGI? It'd be a breach of fiduciary duty to allow it).
There really is a sense in which, relative to AGI / ASI, every other issue is rearranging deck chairs on the Titanic, and *at a minimum* it behooves people to take that prospect seriously if only as a matter of expected value (probability * magnitude of harm) even if for some reason they believe contrary to expert consensus that the probability itself is low. Blithe dismissal of the whole prospect seems like a commitment to willful ignorance.
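To make the expected-value arithmetic concrete, here's a minimal sketch in Python; the two inputs are purely illustrative assumptions of mine, not anyone's published estimates:

    # Expected harm = probability * magnitude of harm.
    # Both inputs below are made up solely for illustration.
    p_catastrophe = 0.05    # a probability many would call "low"
    harm = 8e9              # harm on the scale of everyone alive, in lives
    expected_harm = p_catastrophe * harm
    print(expected_harm)    # prints 400000000.0 - hardly a rounding error

Plug in your own probability; the point is just that a low p times an enormous magnitude is not a small number.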
Definitely not the *consensus* opinion! I work in academia on areas just adjacent to ML and AI. Some people do remain skeptical of AGI, but many of us worry a lot. (A few are giddily optimistic.)
I’m no expert, but my outside view so far has been terribly confusing.
What do people in your community make of the following combination of facts?
1. AI will often solve objectively hard math problems (e.g., “find the homotopy group of this weird 9-dimensional manifold”)
2. It often makes *atrocious* mistakes on simple problems. (Just yesterday I was yelling at it for several consecutive messages because it couldn’t take the first-order condition in an extremely simple Hamilton-Jacobi-Bellman equation. It was extremely resistant to correction.)
I find that on anything even remotely open-ended, it generally performs poorly.
For a quick response, I would say that simply the fact that we're having a conversation about the extent to which AI is good at graduate-level math should undermine confidence in any prediction that "true" AGI is impossible. The Turing test. Self-driving cars. Medical diagnostics. Humor. Etc. So many advances in a very short time.
Beyond that, what gives me pause is that in this field, the people with more expertise tend to be *more* concerned about truly dramatic evolution than those with less expertise.
This is what people call the “jagged frontier” of artificial intelligence. It’s just a far more extreme version of the phenomenon of the absent-minded professor, or the fact that dogs can be so good at understanding and manipulating people but still can’t learn that thunder won’t hurt them.
I don’t believe there is a single thing called “general intelligence” - there are many different kinds of intelligence, and any sufficiently alien being that has a lot of them is likely to have a very different mix than you, and thus will look eerily amazing at some things and dumb as a rock at others.
As one can see by observing the career of Donald Trump, you don’t have to be good at all kinds of intelligence to be dangerous.
This is interesting -- I wasn't aware that this had a name.
But going back to the original point way up in the thread, I feel like I'm frequently told AI researchers/Polymarket/whatever are in total consensus that "AGI" is coming soon. It seems like you're saying the term isn't even well-defined (which I agree with).
What is the “it” in question? Are you using the newest reasoning models (o3, sonnet 3.7, etc)? The progress really is astonishing, albeit not always mistake free (just like, you know, human experts)
Yes -- both Sonnet and o3 have proved utterly idiotic on a range of problems. I'd say that I see idiocy far more often than I see brilliance from these models.
My impression is that there are specialized agents capable of super-human performance in narrowly-defined domains. And that's great until you step beyond the bounds of that domain.
But "How easy is it to expand the bounds of these domains?" and "How easy is it to create new agents with super-human performance in new domains?" are not obviously answered. To the extent that answers are forthcoming, for both questions it seems to be, "Not very easy".
The recent history of large language models shows that it's actually surprisingly easy to expand the bounds of these domains, though hard to control or predict which particular directions the bounds will expand in. Transformers were designed to help with translation, and then researchers discovered they could do sentence completion, and then when they tried to teach them sentence completion, the models were suddenly able to translate more languages, write computer code, and answer questions. Trying to get them to answer questions more accurately led to improvements in mathematical ability and human-manipulation skills, and new capabilities keep appearing - but not necessarily the ones people are aiming for.
Oh, 1 was kind of a joke, sorry. I just meant o3's ability to solve frontier math problems (and the ones I've seen looked about as ridiculous as my joke example).
I was thinking less Vinge and more Hinton / Bengio / Aschenbrenner / Altman / Amodei.
As to Marcus, I honestly don't understand his point of view. The fact that he is willing to make falsifiable predictions like this is epistemically virtuous and admirable (https://x.com/GaryMarcus/status/1873766399618785646), but other than 9 (and I guess 1, kinda, to the extent it's compute / context-length bound), these all seem like things that are basically within the capacities of current models with sufficient scaffolding, let alone by the end of 2027. How is (2) not just a "longer-context-window version of a thing that SoTA models are obviously very good at"?
I disagree completely. Hallucinations are still a huge problem. AI is good at code assist, but it's not good at end-to-end process without direct human expertise. What "Oscar-caliber screen plays" or "Pulitzer-caliber books" has an AI written? I'm floored you think current models can do these things.
As they exist today, they're tools that humans can use for some tasks. But they still make lots of shit up that you won't notice if you aren't an expert, and while they're good at mimicking the form of great writing, they're still quite bad.
"What "Oscar-caliber screen plays" or "Pulitzer-caliber books" has an AI written?"
Those actually seem like very easily achievable goals for generative AI even as it stands. 90% of new screenplays written are just reassembling parts from previously written fiction -- how many versions of Jane Austen novels have been filmed? Do we have any evidence that Tarantino is not just filming AI-synthesized remixes of previous films? True, if we set our AIs to generating screen plays, then most of them will be bad. But most screen plays written by humans are bad. And if the AI generates thousands or millions of them, then some of them will look like good ones.
So I think those are pretty low bars.
But how about the fact that I recently asked Google's AI to give me a five-letter word containing B, F, and G, and it came back and told me that "buggy" contains the letters B, F, and G. That's not just stupid, it is completely indifferent to the truth. It simply dgaf about whether what it spits out is true or not.
Oh, that's just because LLMs don't "think" in letters, they "think" in tokens. If you think this is an unknown fundamental limitation, that's incorrect.
Even so, just ask it to write some Python code to find your word and it will.
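For what it's worth, here's a minimal sketch of the kind of script it will happily write (assuming a standard Unix word list at /usr/share/dict/words; adjust the path for your system):

    # Find five-letter words containing all of the letters B, F, and G.
    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f]
    matches = [w for w in words if len(w) == 5 and all(c in w for c in "bfg")]
    print(matches)

The model fails at the letter-level question itself, but it can produce code like this, which operates on actual characters rather than tokens.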
Pretty much everybody who either actually has to use the darn stuff in an actual business environment or is studying it, and doesn’t think they are going to become a billionaire doing it.
Terrific and useful tools are here and getting better. AGI, not so much.
Gary Marcus is deeply committed to a Chomskyan paradigm that I think has been falsified about as effectively as any paradigm can be. I think it’s more useful to look at the critiques that Hubert Dreyfus was giving of AI in the 1970s and 1980s, which he thought were critiques of the possibility of AI at all, but turned out to be critiques of the Chomskyan symbolic paradigm, with neural nets getting things much better.
I would describe Marcus' critiques as being more about what LLMs are currently capable of, whether there are better paths, and what those paths are likely to be. He is IMHO not arguing that AGI is impossible, but rather criticizing the current approach as a dead end. I personally think his critique differs from Chomsky's.
4o has that new image generation feature. I uploaded a picture of myself to mess around with and 4o told me I look "fantastic" and complimented my Betsey Johnson boot. So I think AI is great.
Just looked at the "Marcus on AI" Substack. First impression: it reminds me of the string theory critics who became prominent a decade or so ago. (His quoting Hossenfelder helped.)
You can tell that AGI is bullshit by the sheer amount of evangelism (often likely paid evangelism) on the subject, including disgustingly often by Matt and people like Nate Silver. If it were really transformative and really 2-5 years away this would just be obvious and not something people would need to be "sold" on like cryptocurrency, another Silicon Valley scam that most of the same people are falling for.
I think “general intelligence” is a red herring. The long history of IQ should help make this clear. But even if there is no single thing that is intelligence, it should be clear that any person or being that has lots of some sorts of intelligence is worth paying attention to, no matter how dumb it is in other ways. Just look at Elon Musk for an illustration of that.
I define intelligence as the ability to take in information and process it in ways that enable effective action towards some goal. There are a lot of types of information that humans are able to process effectively that worms aren't. But there are some forms of information that worms can process that humans can't. There are a lot of types of goals that humans can act effectively towards that worms can't. But there are some goals that worms can work effectively towards that humans can't. In this particular case, the advantages of one are so much broader and bigger than the advantages of the other that on basically any reasonable way of collapsing this onto a linear scale, the humans come out on top. But that doesn't mean the scale actually is linear, or that every comparison is clear cut.
It’s easy to see that Walmart is a bigger business than Peet’s Coffee, even though there are some measures on which Peet’s might win. But I think if you try to compare Walmart to Apple or Saudi Aramco or Tesla, it becomes a lot harder - those three all have much higher market capitalization than Walmart, but Walmart has a lot more employees, and more physical presence in more of the world, and a lot more customers. Walmart is clearly smaller than Amazon, but Amazon also isn’t clearly bigger than these others either. There’s a lot of incomparability of size of corporations, and I think there’s also a lot of incomparability of intelligence, even if some comparisons are clear.
What is the difference between “simulation of intelligence” and “intelligence”? I think that believing there is an important difference here is what leads so many smart people to feel imposter syndrome, where they think everyone else around them is actually intelligent while they are just faking it. It turns out that faking it effectively is all there is to intelligence.
Difference between sapience and sentience, right? I have not followed these debates closely, and this has probably been addressed, but you can easily write a program that protects its own existence by preventing you from deleting it. But we wouldn't say that the program fears death, or experiences pain upon deletion, even though it's superficially behaving the same way an animal would.
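To make that concrete, here's a toy sketch in Python (the filenames and the one-second poll interval are arbitrary choices of mine):

    import os, shutil, time

    # A program that "protects its own existence": it keeps a backup copy
    # of its own source file and restores the file if it gets deleted.
    # It behaves self-preservingly, yet nobody would say it fears death.
    SELF = os.path.abspath(__file__)
    BACKUP = SELF + ".bak"
    shutil.copy(SELF, BACKUP)

    while True:
        if not os.path.exists(SELF):
            shutil.copy(BACKUP, SELF)  # "resist" deletion by restoring itself
        time.sleep(1)

Superficially it acts like an organism avoiding death; mechanically it's a dozen lines of file I/O.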
I think of “sapience” as the ability to manipulate cognitive information, and “sentience” as the ability to have awareness of one’s environment (and perhaps of oneself as part of the environment). When Bentham says “it’s not can they think, but can they suffer”, he is getting at this distinction.
But I don’t think this is quite the same as the claimed distinction between simulated intelligence and real intelligence. Turing argues (flat-footedly) for the behaviorist kind of idea I suggested - if you simulate intelligence well enough, that just is intelligence.
I think that the program that acts to prevent deletion becomes more and more like a being that has fear of death as its actions to prevent deletion become more and more potentially widespread and long-ranging. People don’t translate every awareness of risk into action to mitigate that risk, or even trade that risk off against other goals - but we do a lot more thinking about long-term risks (like cancer and fascism) than many other animals do.
"...it behooves people to take that prospect seriously if only as a matter of expected value (probability * magnitude of harm) even if for some reason they believe contrary to expert consensus that the probability itself is low."
DT, I normally like your posts, but invoking Pascal's wager in a world where Claude, DeepSeek, 4o/ChatGPT, and Gemini exist and are making astounding strides far beyond what anyone thought possible four years ago is like invoking Pascal's wager in a world where Jesus is turning water into wine right in front of you.
Right, and the appropriate response to seeing someone turn a clear liquid into a red liquid in front of you is not to say “this man is clearly the son of the Jewish God, who exists and created the world in seven days a couple thousand years ago”. Even if a bunch of people who hang out with this man all think so.
Right, and if it turns out to actually be possible to transmute water to wine, that’s impressive and gives the guy credibility but I’m still skeptical about the 7,000 year old Earth thing, to strain the metaphor.
Text prediction is not general intelligence, and it certainly isn’t superintelligence. It makes them likelier, sure, but (for example) I’d still wager on humanity being destroyed by politics rather than a rogue AI.
You wrote, "even if for some reason they believe...the probability itself is low."
So, you were asking, "how should someone proceed if they think that the probability is low and they think the magnitude is extremely high?"
I then commented that I recognized this as the set-up for Pascal's wager: what should a maximizer of expected value do if they believe there is a very low probability of a very bad outcome.
I did not say that I believe the probability is low; that was the hypothetical that you introduced.
You then rejoined the conversation to assure me that you do not think the probability is low. Okay, I'll keep that in mind.
I took your point regarding Pascal's Wager to be dismissing the entire prospect of this being relevant (consistent with your top level comment). It's usually invoked in the context of implying that a given probability is negligible such that, being incapable of being distinguished from background noise, it should be ignored because the set of elaborate circumstances required for the alleged harm to come about doesn't deserve to be privileged over various other (likely incompatible) alternative prospects. But this isn't what we have here: just as Jesus turning water into wine suggests that all of a sudden the correct interpretation of Christian theology *is* privileged within probability space, progress in AI and expert opinion regarding AGI imminence is far above background evidence that this is a threat to be taken seriously in view of the obvious harms (and *demonstrable examples* of the kind of alignment failures predicted by people concerned with AI alignment) implied thereby.
Pascal's Wager isn't a moot proposition because it's conceptually wrong to care about expected utility; it's a moot proposition because the set of "things that result in infinite utility loss" is basically infinite. Believing in God / Yahweh / Adonai / El as the one true God is incompatible with various other religious traditions (actual or hypothetical), and the various testimonies thereof don't necessarily move the needle far if all of your other experience is incompatible with the supernatural. But if Jesus shows up in front of you and turns water to wine, that should in fact affect your priors about which possible religious worlds to pay attention to.
The "G" means "general," right? AI has evolved spectacularly in just a few years. But how would those specific awesome capabilities ever become "general"? I'm not even sure what that would mean. Would Gemini be able to solve the hardest math problems *and* be an excellent therapist *and* interpret radiology scans *and* process legal documents? Or more simply, would we ever have a single entity that is great at both chess and Go?
Plus the thousands of others things a typical human can do, even if not as well in each case as a single top of the line AI program.
So I still don't know what the "general" in AGI means.
As someone who is quite worried about this: the appeal to lab timelines plus talk about loss of control is anti-convincing to smart non-experts. The misuse risks are concrete, IMO more scary, and will arise sooner!
I appreciate the intervention although I have to disagree re: more scary. Misuse is a concrete risk, and should be taken very seriously, but it’s fundamentally different in kind and *probably* not literally an X-risk. At least no one stands to make money off engineered plagues (unless you load up on puts, I guess, but that seems like a worse strategy than more productive forms of model exploitation.)
Exactly this. Even if [probability] is low, [magnitude of harm] is extremely large.
And personally, I do not think that the probability is that low. It depends on what you mean by AGI, of course. Perhaps what will emerge will remain less capable than humans in certain regards. But it will obviously be much more capable in others.
I always find it strange when folks say this and don't seem to realize it's an argument for banning AI research and for an extensive... well, war against any nation that refuses to do so. Like, if you take Sam Altman seriously, it sort of leads to the conclusion that he should be in jail. At best.
Uh, there’s a reason that most of the people at these labs have called for government restrictions on the research, and a good number of them did call for a ban. But just like with nuclear weapons research, the people who think stopping all of it would be a good thing also think that the worst thing would be allowing it to proceed unfettered in lawless places while stopping all the law-governed research.
Wait, who doesn't realize it's an argument for these things? In broad strokes I would agree with that (there are a lot of epicycles), although I think international coordination is obviously preferable to war.
There are several steps between the creation of a digital process that can do anything a person can and a radical transformation of society. They need to be massively rolled out at a favorable price without being limited by something like energy or training data. ASI is riskier because there are unknowns around its capabilities but human geniuses certainly haven’t been more dangerous than idiots, historically.
Price and energy seem like currently-solved problems - relative to the cost of a human to perform tasks within present model capabilities the price of tokens / query responses is indistinguishable from zero.
I'm generally not sanguine about training-data limitations being meaningful, simply because it seems like no one's training on realtime visual-motor-sensory input (which is de facto unlimited) all that hard nowadays (presumably Boston Dynamics, and I guess Tesla / Waymo, would be at the forefront here in terms of people with use cases for it). It also seems that Nvidia is trying to build a very efficient world-physics simulation to aid robotics development, and I have no ex ante reason to expect it not to work.
I would guess not alg topology proofs but I also think that alg topology proofs may just be a "throw enough scaffolding and compute and tries at it" situation (although I could be persuaded otherwise).
Unfortunately I also think that "AI has limited Type-2 reasoning capabilities in certain deep domains" doesn't really reflect a substantive mitigation of risk. You don't need to understand parabolas or algebra to be really good at throwing a ball accurately, and I think that "sufficiently good statistical correlation plus the implicit abstraction afforded by large numbers of hidden layers" is likely to be more than sufficient to pose all real-world relevant dangers of machine cognition. Ed.: particularly in view of all the realized examples of misalignment risk being shown in current model research.
Compute in what form? Certainly running current models for longer is completely hopeless here, that’s what I’m asking. Ofc I agree that alg top is completely useless but biology skills aren’t, and I suspect there’s a similar question there
I mean deployment is going to be limited by something eventually. Price and energy seem cheap now because they’re only using less than 1% of total energy and have limited applications. And just because nobody’s training on real world doesn’t mean it will happen quickly when they do for all problems. It’s easy to imagine a world with self-driving and -flying drones but very few things that interact with people socially, for example.
They even reinvented Hell from first principles: "Roko's Basilisk" says that any person who doesn't help the future godlike AI come into being will be resurrected as a simulation and tortured for all eternity.
It's weird that we're having this discussion. Even if AGI does not arrive, LLMs are going to upend society in profound ways. Like... literally this last week, OpenAI probably just ended most of photography as a profession. Yes, art will still exist, and we'll still need some photographers to cover events, but play around with ChatGPT Plus's image generation for one hour, and then try to convince me, with a straight face, that anyone will ever pay for stock photography again. And that was a random Tuesday at OpenAI.
We are already staring at the death of many professions (pretty much any profession that involves data entry, munging/transforming of data, is already dead, even if the realization has not hit yet), and the LLMs that we have right now are much worse than anything we will have in a year.
We should be freaking out about this. It's fine to argue about which actions make sense, but it is silly to dismiss this as some crackpot religious belief.
If you think that a crazy cult forming around an idea discredits that idea, then all ideas are discredited.
Seriously, imagine if people treated a more mainstream idea like this. Imagine if, upon being told that a politician was a Christian, you declared them discredited by David Koresh. Imagine hearing that a politician admires the democratic ideals of the Founding Fathers and arguing that Robespierre discredited them.
Basically, imagine you are a blacksmith during the invention of the automobile. Demand is about to fall off real quick for horseshoes. Now, imagine that almost all of us are blacksmiths.
It's the idea that when we invent AIs that are smarter than humans, they will be able to solve so many problems and do so many amazing things that whatever society is like afterward will basically be unimaginable. For this reason, making sure humans will be able to control and get along with such entities is vastly more important than all other problems.
Why is it an "if?" Technological progress in computer science has been moving along pretty steadily. Even if it slows down, someone will do it eventually, unless you think it's literally impossible. Do you think that there is some law of the universe that prevents anything from being smarter than Einstein?
Seriously, we have a good 5 billion years till the sun explodes. You think no one in all that time will figure it out?
I think that one way the current political discussion is impoverished is that few political writers really appreciate the dynamics of faith as a mode of thought in political action.
What is fantasy about it? You can certainly disagree that superhuman AI is imminent, I do. But do you deny that when it is invented, it will vastly change society and be extremely important?
I suppose the strongest argument against the sort of extreme singularity scenario people like Musk pitch is that maybe there won't be a strong advantage to whoever comes up with superhuman AI first. Maybe it will be possible for other people to copy it fast enough that whoever owns the first one won't be vastly more powerful than everyone else forever. That seems plausible, but it still means that it is likely that superhuman AI will significantly change the social landscape.
As an elderly US biomedical researcher, I completely agree with the core of your essay. Some "indirect costs" are probably inflated, and DEI initiatives in recent years may have distorted some elements of America's research program, but it's hard to understand the overall thrust of the current administration's cuts to science except as visceral hostility towards one of the most important contributors to health and wealth. A particularly egregious current example is the decision to eliminate NIST's Division of Atomic Spectroscopy. https://www.nist.gov/pml/quantum-measurement/atomic-spectroscopy
Perhaps one element of this seemingly mad agenda of cuts is a reaction to an unwise collective expression of partisan Democratic views among US scientists?
>Perhaps one element of this seemingly mad agenda of cuts is a reaction to an unwise collective expression of partisan Democratic views among US scientists?
I think there is a very important distinction between scientists voting for Democrats and scientists who claim to speak for the scientific community making nonscientific claims and getting caught manipulating papers in the name of social justice.
Scientists tend to vote Democratic because Republicans tend to be anti-science. When your university has to build separate facilities to perform stem cell research because the Republicans pass legislation banning federal funding for it and enforces it down to overhead costs paying for electricity that powers lights that illuminate labs that are doing otherwise state-funded stem cell research... well, it's not likely to encourage you to vote Republican and there are functionally only two parties.
Partisan opposition to a political party that opposes your profession is defensible. Claiming that the efficacy of masking and social distancing is different in a church than at an anti-racism protest is an open invitation to the soup-to-nuts politicization of science. Science can be non-partisan even if scientists are not, as long as it remains objective and internally consistent. But a handful of selfish idiots have ruined it for all of us by mugging for likes on social media.
The two professions who were hurt the most by social media incentivizing them to act like performative fools in public were academia (including science) and journalism.
I'm a Democrat, but I'm not convinced that Democrats are pro-science and Republicans are anti-science (I'm not even sure what those terms mean). A variety of opinions wildly at odds with contemporary scientific thought are held preferentially by people who identify as Republicans (e.g. disbelief in evolution; rejection of CO2-based climate change), but others are held predominantly by Democrats (e.g., GMO's will kill us; sex has no biological basis). I'm inclined to accept your explanation for why scientists tend to vote Democratic, but I suspect cultural factors are more salient. For example, scientists are highly-educated and the contemporary Democratic party is culturally more attractive to well-educated Americans. I completely agree with your last paragraph.
The Republican anti-science views you cite are much closer to the center of the Republican party than the (more fringe) "Democratic" anti-science views are to the center of Democratic party.
Would the Democratic party ever have put RFK Jr as Secretary of HHS?
They made a trans woman, Rachel Levine, the Admiral of the United States Public Health Service Commissioned Corps, where Levine put effort into quashing and binning studies which demonstrated negative outcomes from trans healthcare, and pressured WPATH to remove age guidelines for trans care from their standards of care.
Ah, an *assistant* secretary of HHS. Yes, yes, I see it now. Both sides are completely equal in their guilt. And thanks for pointing out that she's trans because that's the important thing.
It is important because Levine is obviously ideologically self interested. If an evangelical was quashing research into stem cells, you would think it was very relevant to note their personal details.
Certainly not. Mr. Trump has seemingly gone out of his way to select senior officials who seem outrageous (though in many cases their reality is less-so). You may be correct about fringe vs center, but perhaps this simply reflects the imbalance of education between Republicans and Democrats.
On the GMO front, that was very specifically the RFK Jr. part of the left. Things like Natural News have apparently been trending more rightwing in recent years. Meanwhile, USAID people were often pro-GMO.
The thing that gets me is that the incorrect Democratic beliefs you list are kind of insignificant compared to CO2 emissions. (And denying evolution can have some uniquely terrible societal effects, but so can believing in it, so. It's a level of harmfulness below CO2, anyway.)
Given that my liberal nature inherently trends towards self-doubt, I always wondered if I was missing something about politics - after all, their side feels the same about mine as I do about theirs, right? But climate change was a north star. "Oh yeah - if they can deny that then they are clearly the wrong side." And maybe it's not appropriate to generalize from one issue like that but we are talking about a subjective evaluation. We do our best.
Since I gave you some pushback... don't get me wrong. The anti-GMO shit pisses me off. It's like anti-nuclear. Our species came up with a god damn miracle to create once-unimaginable abundance, and you're mad about it and want to shut it down, when you can't even point to the supposed consequence.
And it's not just aesthetic. If activists actually had success in fighting back GMOs, it would reduce global food production, and lead to actual human suffering, just like the people who have had to continue living next to coal-burning power stations because of unfounded fears of nuclear meltdowns. It's a really bad position.
As for the other example, the belief is actually that "gender has no biological basis"; it's kind of impossible to claim that sex has no biological basis. (Although I'm sure there are some particularly ignorant people who don't understand the terms in question who do.)
As it happens I think it's also pretty indefensible when you get down to it - gender seems, in a large majority of cases, downstream from sex. You're right that some gender studies massage the truth a little bit to support what is a moral belief (and one that I share) that trans people should be treated with respect. But saying gender has no biological basis is at least less silly than saying "there's no biological basis in what organ your gonads turn into."
I'm not going to argue that ignoring fossil fuels and climate is a problem, because it is. My personal opinion is that the left has exaggerated the urgency of the problem, in part for political reasons, just as the responsible right has understated the problem.
I think the biggest problem facing American civil society today is tribalism. Just my opinion.
I actually agree with you that there was a lot of exaggeration of the climate threat. The problem was not so much the original "Inconvenient Truth"-era projections of what uninterrupted emissions growth would lead to as the refusal to notice when the measures we took vastly improved the worst-case scenario. Rather, it was continually insisted, and is to this day, that "nothing is being done."
As a result, people are still treating a pretty manageable +2C world like the +10C world that actually ignoring the problem would have created. As my grandfather liked to say about non-corporeal problems, "all it takes is money."
Agreed on the tribalism. I think it's kind of a reversion to a historical norm, and it was the LACK of tribalism in the postwar period that was notable. And I think it can be traced directly to near-ubiquitous military service in an existential war - it's hard to care if somebody has a different accent from you when they had your back on Iwo Jima. Obviously that is not something we can or would want to create in every generation.
Now if you want my takes to REALLY get hot, let me talk about how the genuinely-substantial health risks and annoyance of smoking were blown up into an automatic death sentence and the ultimate imposition as part of a scorched-earth effort to eliminate widespread smoking. 😉 (It worked; I imagine most people would say it was worth it.)
I think contemporary panic about climate change among Democratic Progressives represents a mixture of motives, ranging from scientific ignorance, to failure to adjust extrapolations, to a sense that for anything to happen we have to exaggerate, to a sincere desire to take down the capitalists, to Russian disinformation, to a quasi-religious environmental perspective. As with most attitudes, individual motives are generally complex.
Re tribalism, there surely must be ways to introduce/promote national service that don't require war.
Re smoking, I don't like the way the risks of e.g. passive exposure were exaggerated, but that reflects that I'm a scientist and dislike manipulation of facts. The decline in smoking in the US is very impressive; I couldn't say whether it's because of a cultural shift, health fears, or taxes.
The rich irony is that scientific funding actually increases when Republicans are in power. The GOP is not anti-scientific-funding... I mean, it didn't used to be... but it wanted to dictate what was and was not appropriate for scientists to investigate, which is antithetical to science.
I think the main drivers are the tendency for Republicans to be skeptical of how taxpayer money is spent (they constantly introduce legislation to insert themselves into the grant review process) and that part of their base has very strong moral objections to specific kinds of scientific inquiry.
Part of the Democrats' base is anti-science in that they vilify acronyms like GMO and think magnets cure cancer, but they seem to lack the will to enforce those priorities on research funding. They tend to direct their ire at the private sector and toss red meat to their base by hauling Monsanto executives in front of Congress.
Which party is pro- or anti-science isn't about the stupid beliefs members hold; it's about whether their policies will lead to more or less scientific research being performed.
I think it’s obvious which party is actually pro science.
Historically it's not obvious, though it's clear that Mr. Trump is trying to cut government support for basic science.
Many Democrats make assumptions about support for science that may be incorrect. US scientific research has prospered to the extent that it has succeeded in remaining nonpartisan, allowing it to win bipartisan support. I find the current environment remarkably unpleasant, but I was also irritated by the pervasive Woke climate that infiltrated scientific funding during Biden's term.
If the goal is to drive a wedge between Republicans and support for science, loudly screaming about how they hate science is a good way to do it.
It's hard to remember, but our most recent non-Trump Republican president was Bush, and he poured craploads into stuff, a lot of it stuff I thought was silly even at the time, like hydrogen. But you can see the outlines of what he was going for, and a lot of people disagreed with me, so it wasn't categorically stupid. We probably learned some interesting things even if the path to a hydrogen economy ultimately failed.
I don't think that is clear at all. It is just as likely that Trump is trying to control science and academia as part of the consolidation of power within the political movement that he leads.
maybe both? TBH with Mr. Trump I never know when he's doing something subtle and tricky, and when he's just throwing his plate of food against the wall.
When they're closing Social Security Administration offices and Hegseth is promising a cumulative 40% cut in DoD's budget it makes it hard to argue that the savage cuts are focused just on Democrats and Democrat-supporting constituencies.
These guys are just nihilists when it comes to the government, pure and simple.
"I think there is a very important distinction between scientists voting for Democrats and scientists who claim to speak for the scientific community making nonscientific claims and getting caught manipulating papers in the name of social justice."
I think that people who talk about extreme edge cases that they can't even provide any examples for are generally trying to cause confusion and mislead people about the basic reality of a situation. People think a lot of things.
What you are alleging DOES happen but is sufficiently rare - my God, especially in hard sciences - that citing it as a major issue in the context of chainsaw-deep cuts seems like intentional obfuscation.
The most salient example is the "lab leak hypothesis". Without any evidence whatsoever, many prominent scientists---specifically those with big public profiles who speak with authority---began claiming that there was no scientific basis supporting the hypothesis that covid 19 originated from a lab.
A very high-profile paper was published (https://doi.org/10.1038/s41591-020-0820-9) stating: "we do not believe that any type of laboratory-based scenario is plausible"
In the trove of the authors' private messages that have since become public are quotes from the corresponding author of the definitive "proximal origins of covid" paper cited above, such as “some of the features [of Covid-19] (potentially) look engineered” and that it was “inconsistent with expectations from evolutionary theory”. And in an email (obtained with a FOIA request): “I think the main thing still in my mind is that the lab escape version of this is so friggin’ likely to have happened because they were already doing this type of work and the molecular data is fully consistent with that scenario.”
The exact person expressing that level of uncertainty published a paper in Nature Medicine that reads: "The genomic features described here may explain in part the infectiousness and transmissibility of SARS-CoV-2 in humans. Although the evidence shows that SARS-CoV-2 is not a purposefully manipulated virus, it is currently impossible to prove or disprove the other theories of its origin described here. However, since we observed all notable SARS-CoV-2 features, including the optimized RBD and polybasic cleavage site, in related coronaviruses in nature, we do not believe that any type of laboratory-based scenario is plausible."
Granted, the initial impetus seemed not to be social justice but rather to protect gain-of-function research, i.e., they did not want it to be blamed for covid. But it was quickly swept up in partisan politics, and *many* scientists and physicians published similar papers and signed on to letters condemning the lab-leak hypothesis as racist. Nonetheless, writing that "the evidence shows that SARS-CoV-2 is not a purposefully manipulated virus" in a prominent scientific journal after privately stating the exact opposite strikes me as a clear case of manipulating a paper.
Then there were the letters about how enforcing social distancing and masking rules on protestors amounted to "shutting down protests under the guise of health concerns." (https://www.cnn.com/2020/06/05/health/health-care-open-letter-protests-coronavirus-trnd/index.html) Fair enough, but the same people who railed against church-goers as recklessly spreading covid said: "However, as public health advocates, we do not condemn these gatherings as risky for COVID-19 transmission. We support them as vital to the national public health and to the threatened health specifically of Black people in the United States.... This should not be confused with a permissive stance on all gatherings, particularly protests against stay-home orders. Those actions not only oppose public health interventions, but are also rooted in white nationalism and run contrary to respect for Black lives."
I don't think this is an edge case or that I am cherry picking. I am a physical scientist and these clowns did not and do not speak for me or the scientific community writ large. Yet their invocation of science and claims of scientific evidence that did not exist *and that they directly refuted in private email conversations* are now being used as justification to destroy my career and profession.
Part of the sanewashing of early administration moves was to paint them as clever political positioning. Thus, going after USAID was smart because it forced Democrats to defend foreign aid which no one likes.
I think we can put the notion that these folks are clever strategists to bed. They're randomly raging through the entire federal government, destroying whatever their Sauron eye happens to land on next.
Whenever you think something the Trump admin is doing sounds like a good idea, you can be assured it will be implemented in the most incompetent and actively idiotic way imaginable. What seems to be happening is that one of the more learned members of his team suggests a good (or at least defensible) idea, and then it is implemented in a warped and twisted way.
What the US is doing is akin to a high earning man with a big mortgage deciding to rip out the pipes from his walls to sell for scrap just to make a little more cash. It's unfathomably stupid.
The two things from the Trump administrations that I have thought of as good ideas were capping the mortgage interest tax deduction and ending the penny. I have no idea if ending the penny has actually happened, or if it was just an announcement that never went into force.
It's technically a thing that Congress has to implement for it to have teeth. The executive has the power to order the mint to produce an appropriate amount of coinage but not which coins (and bills) to make. That's Congress. In theory Trump can tell the mint that the appropriate amount is 0, but that's probably challengeable in court (like everything else). Someone would just have to file a suit arguing that in fact there is a need for pennies and the appropriate amount is not in fact 0.
Make the appropriate amount a very limited production run every year for collectors and archivists and you'd have a functional discontinuation of the penny that'll be much harder to challenge.
I think the fact is, most actual good policy ideas that could be implemented at the federal level either require an act of Congress, or a lot of hard work and planning at the executive level to set up. Since the GOP, despite having a majority, does not seem to be passing any other bills because they are focused on the big tax cut, and anyone smart and capable and able to do long-term planning does not seem interested in working for the Trump administration, here we are.
I disagree with this -- if the goal is to cripple the power and influence of universities and other scientific institutions (because they're left-coded) then this is a competent and well designed set of policies.
I guess one question is whether going to college makes you more liberal, or whether being more liberal makes you likelier to go to college, i.e. which way does the causal arrow point. If the former, then reducing the size of the higher education sector can make the country more conservative and push policy to the right, but I'm not sure if the former is in fact accurate.
The thing I find odd about all this is the lack of internal opposition to Trump's oddball beliefs across the board. Is there not a single Republican senator or congressman who is a gung-ho science supporter, does not like Russia invading European countries, DID understand what he was taught in Econ 101 about tariffs, and understands how valuable legal immigration is?
I guess that in order to BE a Republican you have to suppress the knowledge that deficit-creating tax cuts are detrimental to long-term growth, and not understand that taxing or regulating negative externalities promotes it. But it's sort of amazing that the party can be so united around the anti-growth agenda, with only the tax angle having the redeeming virtue of transferring income to rich people.
Chimpanzee politics? Maybe they’re waiting for their present leadership to exhibit a little more weakness before they make their move. They can’t all be on the same page, can they?
It's going to take every fiber of our being to welcome these prodigals back into the fold when they finally find that it's safe to attack MAGA publicly. The urge to tell these weak, cowardly bastards to go to hell will be so overwhelming. Dentistry will be a growth industry, due to the dental damage caused by such intense teeth gritting.
Also now there's the threat of not just a Trump-endorsed primary but a Trump-endorsed, Elon-funded primary challenge. THAT has real teeth for some politicians. The game theory suggests the individual best thing to do is shut up unless you can be sure you're not the only one publicly in defiance.
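To make that coordination logic concrete, here is a toy sketch in Python. Every number in it is invented for illustration (there is no real payoff data here); it just shows why, absent a way to coordinate, silence is the individually safe move.

    # Toy coordination game with made-up payoffs (illustration only).
    # Each senator chooses to defy or stay quiet; defying alone invites a
    # funded primary challenge, while defying in sufficient numbers succeeds.
    def payoff(defy, n_other_defiers, threshold=20):
        if not defy:
            return 0            # staying quiet is the safe baseline
        if n_other_defiers >= threshold:
            return 5            # safety in numbers: collective defiance works
        return -10              # lone defiance gets you primaried

    for n in (0, 5, 25):
        print(n, "defy:", payoff(True, n), "quiet:", payoff(False, n))

Unless a senator is confident the threshold of fellow defectors will be met, defying has negative expected value, so everyone waits for everyone else and nobody moves first.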
From their perspective, they see it as their version of the "ethic of responsibility" or the "slow boring of hard boards." Opposing the admin will just get you kicked out of politics and replaced by someone who genuinely shares their anti-science beliefs, which doesn't help anyone. Better to just try to influence and moderate the administration privately and from within.
It's a rational thought process, but it also goes to show that there are limits to Weberism. At some point, you do have to decide that compromising isn't worth it and that it's better to help the other side. But where that line is exactly is a judgment call.
I'm sure some of them would survive, but all of them - 100% - would be at high risk. The unfortunate fact is that more than one half of high-propensity GOP primary voters (which probably equates to more than 20% of the country) are fully and wholeheartedly in support of right-wing authoritarianism (though not all of them have intellectualized that belief system).
I am 50 years old and, all my life, it’s the one thing that absolutely every Republican administration is certain to do. It’s one of the only real accomplishments of the first Trump administration, most of the rest was just noise.
I was calling the 2017 tax cuts for the rich and deficits act the "Tax Cuts for the Rich and Deficits Act" when Matt was still at Vox. :) (I'm not claiming plagiarism. :))
You've kind of listed my red lines where I can't make any compromise with the right. That plus relitigating the 2020 election. Take away those and I'd still probably be more on the right than the left.
Of course many Republican office holders hold such views and are happy to tell Chuck Schumer everything about them in the gym. But the cowards will never stick their necks out in public.
It’s more than the making of scientific pronouncements about political topics. If you look through grant proposals you’ll see an incredible amount of needless groupthink-influenced buzzwords. The grant writers know that a proposed study of statin use during menopause would have less of a chance of being funded than a study of statin use in menopausal lesbians of color.
It seems somewhat similar to a kid who gets his first corporate job and starts spewing corporate buzzwords because he thinks that’s how the cool kids talk.
That said I’m not really sure how it all came to be. Why did we have senior academic leaders openly using “white males” as a pejorative? How did trying to be inclusive end up excluding groups that are central to the continued funding of the organization?
Are you a scientist? There's no way the claim you made about which grant would be more likely to get funded is true. It is plausible, even likely, that discussion of how your grant will help marginalized communities will make it more likely to get funded, and even a design that added additional focus on them would probably help. But broad research is just much more significant and fundable than narrow research.
I am and it is. Grants are so competitive that you essentially need a perfect score from a review panel just to be considered for funding. If there is one person on the review panel who thinks that science has to be anti-racist or promote equity or whatever, they will spike your proposal. But the reverse is not true because very few people will spike a proposal because it has that language. Moreover, funding agencies *require* sections of the proposal that all but insist on the use of such language. Prior to literally last month, if you wanted to get funding from the Department of Energy to research photonic coatings to improve the efficiency of solar panels, you literally had to include a section entitled "Plan for Promoting Inclusive and Equitable Research" explaining how that research would promote diversity and equity in science.
“…if you wanted to get funding from the Department of Energy to research photonic coatings to improve the efficiency of solar panels, you literally had to include a section entitled ‘Plan for Promoting Inclusive and Equitable Research’”
That obviously shouldn’t have happened. How, then, to prevent it from happening again?
The problem here is that grant funding is a proposal-writing contest. If the board gets two proposals to research photonic coatings to improve the efficiency of solar panels, they can't pick between them on the basis of which line of research is more likely to be fruitful, because every single expert on the efficiency of photonic coatings for solar panels is almost certainly a current or former member of one of the two research teams and will therefore say "my current/former team". So they need some other tie-breaker, and that's inevitably going to be essentially a writing contest.
So if it isn't a contest on "how well can you write a 'Plan for Promoting Inclusive and Equitable Research'?", it's going to be on "how well can you write a 'Plan for Promoting Great Scientists of History in our field of research'?" or some similar anti-woke thing. Or it's going to be "who is the more prestigious university?". Or "who asked for the least money?"
In some cases, there are people who are sufficiently qualified to judge between specific proposals on a technical level (a good example here is telescope time in astronomy - there are lots of people working for the telescopes who can judge between the different proposals as to what is the most scientific gain for the least telescope time), but for a lot of research, the only people who really understand it well enough to do a meaningful comparative cost/benefit analysis are the researchers themselves and maybe one or two competing research teams.
>So if it isn't a contest on "how well can you write a 'Plan for Promoting Inclusive and Equitable Research'?", it's going to be on "how well can you write a 'Plan for Promoting Great Scientists of History in our field of research'?" or some similar anti-woke thing. Or it's going to be "who is the more prestigious university?". Or "who asked for the least money?"
Or just looking at their track-records and determining one has a higher chance for success.
Which is a great way of shutting out new people from doing research.
I’m not claiming that they have found the best possible approach - merely that there is a real problem, and in many cases they’re probably going to use a tie-breaker that is good in itself but orthogonal to the research.
Broadly I agree that this is a good idea, but people are reluctant to give up the idea that they can make effective distinctions (on the reviewer end) or the idea that they have ultimate agency in their own lives (on the proposer end), even when it's clear that they can't and don't.
Canada has a system that functions essentially that way. Researchers are more-or-less ensured a baseline level of funding so long as they meet certain criteria and submit coherent proposals. They then compete for funding above and beyond that in a more merit-oriented process.
Because Americans seem to suffer from a cultural delusion that it's better and more "meritocratic" to allocate by random chance laundered through over-optimized competition than by explicit lottery.
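For concreteness, here is a minimal sketch of the baseline-plus-lottery idea in Python. The dollar figures, merit bar, and proposal fields are all assumptions for illustration; this is not a description of Canada's actual mechanism.

    import random

    # Two-tier allocation sketch: every coherent proposal gets a baseline
    # award; the remaining budget is handed out by explicit lottery among
    # proposals that clear a merit bar.
    def allocate(proposals, budget, baseline=50_000, top_up=150_000, bar=7.0):
        awards = {p["id"]: baseline for p in proposals if p["coherent"]}
        budget -= sum(awards.values())
        eligible = [p["id"] for p in proposals
                    if p["coherent"] and p["score"] >= bar]
        random.shuffle(eligible)        # the lottery: order is explicitly random
        for pid in eligible:
            if budget < top_up:
                break
            awards[pid] += top_up
            budget -= top_up
        return awards

    proposals = [
        {"id": "A", "coherent": True, "score": 8.2},
        {"id": "B", "coherent": True, "score": 6.1},
        {"id": "C", "coherent": True, "score": 7.9},
    ]
    print(allocate(proposals, budget=400_000))

The design point is that reviewers only make the coarse judgments they can actually make (coherent or not, above the bar or not), and the residual noise is made explicit instead of being laundered through a proposal-writing contest.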
Seems to me a good way to convince them is to choke off all of their federal funding until the research institutions publish a policy forbidding any consideration of race or sex outside what is scientifically justifiable, and forbidding any other useless distinctions, e.g., equity.
Sure, Congress can---and does---attach all kinds of requirements to funding. But what is happening now is lawless, unaccountable, unilateral withholding of appropriated and awarded funds. It is a direct assault on the constitutional system of government and is in no way justifiable because some people don't like how some taxpayer money was being spent at some universities by some researchers.
I don't think "we're going to force the intellectuals to conform to the ruling political ideas" is either a good idea or one with a successful track record.
Similarly, for NSF proposals you have to have a Broader Impacts section (this is mostly a good thing) and until recently in computing you needed to talk explicitly about Broadening Participation in Computing. But this didn't mean that panels actually preferred research about "woke" topics over more conventional topics, just that it was a disadvantage not to have something to say about BPC.
The definition of "woke" is important. If your broader impacts discusses plans to engage K-12 students and it does not include language making it abundantly clear that the schools you work with comprise a diverse population of historically under-represented, under-served students, etc., you will hear about it from the panel.
I'm not saying that you have to write an essay about how math is racist in order to get funded. I'm observing that I have reviewed and submitted zero federal grant proposals that do not directly identify a group of people, use adjectives that convey their historical inequity with regard to science and then enumerate a plan to help mitigate that inequity.
Based on this observation I assert that you cannot, in fact, write whatever you want in a PIER plan or a Broader Impacts section. If you do not explain how your esoteric, highly specialized scientific investigation will "promote inclusive and equitable research" your proposal will not be selected for funding. I don't know how else to characterize that other than compelling the use of "woke" language.
I assert that many people in the scientific community have internalized a specific philosophy (that I am labeling woke) to the degree that they have lost sight of the basic fact that you cannot run a successful research program at an American university without familiarizing yourself with specific language and including it in your proposals.
For a bit of context: I moved to the US from Europe and the first thing I had to do was ask my American friends what a diversity statement was and how to write one.
It's also the case that different agencies handle these issues in their own ways. At NASA, the Science Mission Directorate was starting to require Inclusion Plans for some proposals. While some people were worried that this was going to be all about esoteric DEI and woke academic ideology, it was actually really basic stuff like "If you are working with grad students, are they going to have the chance to present this research at some point? How are you going to make sure all your team members can raise concerns? Are you going to recruit people by just asking your two closest friends, or will you at least try to open up the search a bit?"
Sure it might be a bit of micromanaging, but what aspect of proposing for federal grants isn't like that? The basic goal was to get PIs to think a bit about how to involve people and have a functional research team. Nothing about race, gender, etc was required, and in the several reviews I participated in, none of the proposers were dinged if they just focused on team activities and didn't address the DE part of DEI.
I talked to both proposers and reviewers from the more conservative, DEI-skeptical side of the scientific community and outside of a single early review that was handled badly, they came away comfortable with what was happening.
Just like all areas of political culture, the worst excesses of any larger effort are always the poster children for the whole thing. In reality, the bulk of the people in large bureaucracies are just trying to do a good job to meet the agency goals, whether it is health, science, technology, etc.
“…in the several reviews I participated in, none of the proposers were dinged if they just focused on team activities and didn't address the DE part of DEI”
Did any proposals get dinged if they did include DEI?
Right, I agree with basically all of that, although we probably disagree somewhat about the merits of it. My point is that you can say "I'm going to research the impact of metal homeostasis on viruses, and I'm also going to give some talks in the most disadvantaged local elementary school" and that's more likely to get funded than "I'm going to do research on how viruses are mostly bad for minorities".
More broadly, science funding in the US is in many ways best characterized as a proposal writing contest, and this is an aspect of that more than a change in what's funded.
I am not defending the ongoing assault on academic science and I'm not saying that we deserve what is happening. I also doubt we disagree on the merits. I am a huge proponent of outreach to under-served communities. Having benefited from it myself, I organize and engage in it as much as possible.
If we disagree at all, it is on the distinction between the technical merits of a proposal and the broader impacts. I don't think you can separate them and I do think that proposals that decorate their abstracts with woke language are more likely to be funded.
I don't oppose any of the activities and I think it's a good thing that federal funding encourages broader impacts, etc. If I sound exasperated it is because I feel that the funding agencies have been colonized by an activist mindset that thinks it is ok to rob us of our agency to determine how we want to contribute to the broader community. Having to type out the words "Plan for Promoting Inclusive and Equitable Research" and then populate it with a bunch of stuff I don't believe is offensive to my liberal principles.
What area of science are you in? I'm in biomed/phys/chem and that's not my experience at all. But I appreciate that DEI has been adopted more heavily in some communities than others (my impression is that astro is pretty into it, for example).
The materials version of your field---but it doesn't matter because it just takes one reviewer who is really into "all of that stuff" to spike your whole proposal because they don't think your Broader Impacts is up to their standards.
> Broader Impacts section (this is mostly a good thing)
I'm only n of 1, but I give zero weight to anything DEI in the broader impacts. But like you say, panels that I've been on are still judging proposals almost entirely by the science.
Indeed, there have been a bunch of programs funded to explicitly attempt to address inequity of various kinds. Obviously one can debate whether those things are good to address, or whether the funded programs are effective, but that's very different from suggesting that everything funded by the NIH today is "woke".
Yeah, those plans were really annoying. I would say there's a difference though between a proposal in which that inclusion/equity is core to the research (a few) and one in which it's an add-on (the majority). And I can tell you that for the majority, a lot of scientists have been using ChatGPT to help put them together.
So again, I want to distinguish between blaming scientists for being too "woke" (not unheard of to be clear) and blaming us for following the incentives created by politicians and the administrators who were hired to implement their directives.
I agree that we mostly just accept the constraints of the proposal writing competition and evaluate the core ideas being proposed and the track records of the people proposing them. If they made us write grant proposals in crayon, we'd do it. I resent the abuse of those incentives, whether that's making every grant include the word Quantum and the acronym AI or narrowing what counts as broader impacts to specific social justice outcomes. And now I'm hopping mad about it because it's created a paper trail of abstracts with ridiculous woke language for completely unrelated research and we're being beaten over the head with it.
There is absolutely nothing wrong with funding research that has social justice components or that is primarily aimed at DEI goals. Nothing. It's the indefensible absurdity of forcing it on everyone else that is wrong. Not the goals, mind you, specifically the language. A proposal about developing materials for quantum computing that mentions racial bias reads---correctly---as Orwellian and discredits the entire enterprise.
“ It is plausible, even likely, that discussion of how your grant will help marginalized communities will make it more likely to get funded, and even a design that added additional focus on them would probably help.”
It doesn’t sound like you’re disagreeing with what I said.
No, actually, what research you do and what you say about the benefits of that research in your grant proposal are two very different things. Also, "broad research is more likely to get funded if it also attends to the interests of narrower communities" is very different from "broad research is less likely to get funded than narrow research."
Aligning your work with the priorities of your funders is a game grant writers have played since time immemorial. If you want research aligned with your new and different values, state them clearly and do a good job of analyzing the proposals.
The subtle, but important, distinction is that the woke spin is not part of the *technical proposal* where you write inscrutable jargon only other experts in your field understand.
The woke spin is confined to the sections where you are obliged to describe how awarding you this grant will lead to "broader impacts". Although it can contain boilerplate stuff about writing papers, giving talks, organizing symposia, etc. it should also explain how you will engage with the broader community and whatnot. *That* part is where the woke language creeps in.
The insidious part is that the language of the technical proposal and the broader impacts are merged in (public) summaries, making it appear to the fine young asshats at DOGE that we're pouring money into scientific research that is being diverted to help teach kids with gender dysphoria how to read the Periodic Table. And the propagandists in right-wing media are all too happy to amplify this misinformation to manipulate the public into thinking it's actually a good thing to cripple scientific research because it's not even real to begin with, just another part of the Big Woke Conspiracy.
But, as Matt often points out, you could define the marginalized groups economically and it would disproportionately help minorities but without the deeply unpopular racial baggage.
He’s saying that the “help marginalized communities” part isn’t done by limiting the scope of research (to “menopausal lesbians of color” in BZC’s droll formulation), it’s done in other ways.
I don't endorse BZC's rhetoric, but I saw it as an exaggeration meant to make the point that the scientific community became (still is?) obsessed with identity politics to the point that research proposals adopt the language of politics rather than science. And, yes, I do see Sam's response as refuting a specific thing in BZC's comment and then agreeing with what I took as BZC's meaning.
I think there’s a difference between doing research that’s broadly useful but framing it as beneficial to certain groups in order to get funding, and doing research that really only is targeted at certain groups. You may not agree that’s an important distinction, but if you recognized Sam was making that distinction, then it was uncharitable to treat him as being inconsistent, as you did in your comment.
The lack of precision from the more dedicated anti-woke commenters (and I say this as someone who’s very woke-skeptical) is striking. I still remember the person who refused to believe that using the word “negro” in its historical context wouldn’t get you fired from academia.
And I’m here in academia telling you that this is not the case. If you show me an example of it happening, that will be much more than the other guy managed to do!
All I can tell you is there are a ton of gratuitous invocations of race, gender, LGBT, discrimination, etc., in journal articles and have been for years. I think it is fair to say including or centering that stuff helps get you published.
I'd say it's a little more complicated than that (isn't it always?).
Scientists are certainly not immune from groupthink. But in terms of proposal reviews, it's more boring than you claim; I know if I reviewed the proposal you suggest, I would take points off for it being too narrow. Instead, the way proposals like that get funded is often through targeted programs rather than general calls. Here's a random example I quickly found: https://grants.nih.gov/grants/guide/pa-files/par-22-186.html
Scientists respond to incentives like anyone else. If there's specific pots of money for these things, we'll figure out how to go after them.
“…a proposed study of statin use during menopause would have less of a chance of being funded than a study of statin use in menopausal lesbians of color”
This is just absolutely false. If you propose something and say it affects a smaller group rather than a bigger group, that isn’t going to help.
But if you add a paragraph about how your broadly beneficial research will have particular benefits for underprivileged groups, that will help it at some stages of the process.
An unfortunate quality of universities (and many large bureaucratic organizations) is how they have evolved into Rube Goldberg machines that deliver robust outcomes through inscrutable processes that are easy to criticize and hard to defend.
Imagine the mid-20th century, when the then-largest generation in US history---the Baby Boomers---was about to start graduating high school and entering the workforce. There was no way for the economy to absorb them, so a conscious decision was made to build out state university systems as a sort of flow control. Among the many consequences of this shift were an increase in careers available to women (e.g., professor), an increase in the production of nerds (e.g., to start tech companies and work for NASA) and the codifying of the government-funded research model that grew out of World War II.

Fast-forward a bit and the boomers moved through the university system, leading to dropping enrollment. Many state universities started to evolve into research-focused institutions, leading to the virtuous cycle of educating nerds and producing innovations that those nerds could implement in the private sector to create the modern world of information technology, pharmaceuticals, materials, etc.

That evolution, however, came with a huge cost: a modern R1 university needs to maintain tens or hundreds of millions of dollars in infrastructure to support research in the natural sciences. Without that infrastructure, universities would not be able to recruit and retain the talent necessary to train the next generation of nerds, and the whole system breaks down.
While purchasing expensive equipment typically involves special infrastructure grants, maintaining it requires a lot of people, from highly specialized technicians to highly generalized facilities workers who take care of everything from leaky pipes to sophisticated redundant power systems needed to keep sensitive equipment (e.g., that requires cryogenic conditions) from failing every time the power goes out. That is what overhead pays for: facilities and administration. It pays for, among other things, the staff needed to ensure grants are submitted properly (which is non-trivial), funds are spent correctly, equipment doesn't break down, labs maintain proper airflow, etc.
Large state universities have become self-sustaining through tuition, donations and overhead. Often they are only state universities insofar as the state owns the land underneath them. Congress used to recognize the vital role universities play in educating the workforce and so made the process of adjusting overheads slow and deliberate. Overhead caps are not fixed, but when they have changed in the past, they have done so with plenty of warning and buy-in from the universities. Mandating sudden cuts to overhead and then enforcing them by threatening employees at NIH and NSF is illegal and should be blocked by the courts, but none of that matters because the current rates of overhead from active grants are already accounted for in the budgeting processes. State legislatures cannot and will not just hand over a bunch of money to make up for the shortfall, and private universities cannot and will not sell off a bunch of assets.
Universities across the country are already rescinding offers and freezing hiring of new faculty and admission of new graduate students. As Matt points out, the consequences of these actions will be felt later (meaning whoever is in power at that time will likely take the blame) and they will be amplified by the other mindlessly short-sighted policies. If you cripple university funding, you lose faculty and teaching capacity, meaning they educate fewer nerds, meaning fewer people become scientists, engineers, doctors, economists, etc. In a few years, when there is a shortage of home-grown doctors, there won't be any skilled immigrants to fill in the gaps. Less healthcare will be delivered, meaning a slowdown of a huge part of the economy. As China overtakes the US in key areas of technology, there won't be enough American labs producing enough homegrown computer nerds to counteract that trend and we will spiral into technological irrelevance. Meanwhile there won't be enough nerds to measure all the wild fluctuations in the economy, leading to more economic chaos and slower growth.
None of this is easy to explain to people who are already disinclined to care and especially those who have bought into the idea that America should be more like China 30 years ago, manufacturing low-value goods and throwing its weight around regionally. It also does not help that so many scientists publicly beclowned themselves at the height of woke panic and made us vulnerable to politicization.
TL;DR the attack on science is way worse than it looks and scientists have already moved into a mindset of managed decline.
I agree with basically everything else you write, but I have not seen the "flow control" claim before. Do you have a source I can read about that? And there wasn't an enrollment drop post-boomers; instead there's been a continued expansion (with maybe some post-2008 retrenchment).
Philip Bump's book The Aftermath makes a compelling argument complete with references. He tracks their progress as they aged, pointing to massive investments that followed, like the creation of the disposable diaper industry and the rapid construction of elementary schools, and then points to all the knock-on effects. And I think it is logical to assume that this pattern didn't just stop when the boomers started turning 18.
The fear that the boomers aging into the workforce would disrupt the labor market was very real, there were many proposed solutions, and the GI Bill had already demonstrated a similar effect. While I'm not sure you can point to a single document laying out the exact plan, it is not a coincidence that so many state universities feature brutalist architecture. It is a reflection of the era in which they were rapidly built out.
It makes total sense as there was a great fear when the war ended about how the economy would absorb all those GIs. The GI Bill college funding provision certainly reduced the volume of men reentering the workforce.
Many commentators (MY and in this thread) try to connect the current cuts to US science to some particular political stance. In reality, authoritarians commonly cut science as a threat to their power, often coupled with pseudoscientific beliefs and a reliance on native-born scientists. The only truth that can be allowed comes from the mouth of the leader. The Soviet Union had Lysenkoism (destroying basic biology); the Chinese Cultural Revolution sent intellectuals to pig farms (or killed them). Hitler banned "Jewish Science," i.e., relativity. Of course, a few areas were allowed to progress (rocketry in Germany / USSR) but only with very specific political goals in mind. This is not that hard, people!
Imperial Germany under Wilhelm II, the Brazilian military junta and James I were all great for science; there is no absolute rule about whether authoritarians are good or bad for science.
You don't even have to go that obscure. The People's Republic of China as it currently exists is hardly a bastion of free thought and yet it is either at or racing toward the technological frontier on essentially every relevant hard science domain. I think a more coherent case, rather than 'authoritarians = bad science', is that ideology is bad for science and ideological authoritarians have more leeway to punish academics for deviations from the state ideology.
You see this today in China where research of certain areas in the social sciences is absolutely forbidden, but it just happens to be the case that those areas don't really matter. And ideology is bad for science in the US too, as we've seen, but it's a little easier to get away with being heterodox when the orthodoxy's power ends at social and professional sanction rather than imprisonment.
That is the theory, and it is probably true of theocrats, but I think in Nazi Germany and the Soviet Union the preference for applied science was mainly motivated by anti-Semitism and Jews being overrepresented in more theoretical areas.
Without getting into a discussion about the extent to which you can compare James I with the regimes I mentioned, the concept remains that attacks on science commonly reflect more on the ruler than on the scientific establishment.
"...the extent to which you can compare James I ...."
On the one hand, he did support the most up-to-date, cutting-edge witch-hunters.
On the other hand, he wrote a long treatise about the dangers of smoking tobacco:
"A custome lothsome to the eye, hatefull to the Nose, harmefull to the braine, dangerous to the Lungs, and in the blacke stinking fume thereof, neerest resembling the horrible Stigian smoke of the pit that is bottomelesse."
I think you mean "totalitarian", not "authoritarian". A totalizing revolutionary ideology (Nazis, Soviets, Mao) can't tolerate alternative sources of truth. A dictator can be fine with it, though.
the soviet union had an informal arrangement with their math community where if you kept your head low, you could go about your research more or less unmolested. of course, the politics within the community itself were quite vicious. (once you got past the gatekeepers, who would sometimes sink jews and other minorities in admissions)
So “the singularity” is coming, it’s going to change everything, and the people with power think the most important thing we can do about it is… fire a bunch of low level paper pushers, potentially cutting something like 0.5% of federal spending.
I’m not sure I get it. Shouldn’t the world’s richest man have higher priority things to do?
That’s fine, and I do think AI has the potential to massively reduce the number of people needed to perform government administrative work. But I don’t see how or why progress in AI models justifies or necessitates any of the administration’s actions.
If anything, if there’s large scale private sector job loss on the horizon, I think we’re eventually going to need more government employment and/or more redistribution. We can’t all be personal trainers, aged care workers, and tiktok stars.
Personally I don’t think it has anything to do with AI. 99% of it is just Elon’s drug use, bipolar disorder, autism, being way too online and the long term effects of being the most financially successful person in the world. He’s just very high on his own supply.
Yeah, I think I agree with that. But if Musk really does think we’re on the verge of a new world, I would expect him to have some kind of theory of how his actions are connected to that new world. Even if it’s a bad theory.
It makes sense if you think the idea is to get as many of Those People out of the government as possible for when the times comes to decide how to parcel out God's rewards.
Ummmmm... the scientific community bent over backwards to try to offer neutral statements about various findings (and indeed, there's almost a culture of shame when someone gets too big for their britches and does "science by press release"). But the Gingrich/right wing assault on science has been vast, and at some point people realized you have to fight back and not just roll over for the sake of neutrality when well funded organizations tell lies about your work. Because now we're in the endgame... fewer foreign researchers will come to the US, fewer research projects will be done, and the breakthroughs that could've happened now will not.
I’m going to disagree a bit. People who go into climate science tend to have a certain worldview. And that worldview includes a strong belief that saving the environment requires people to sacrifice and do without. But that makes the findings needlessly political.
They can and should say - the data indicates this is happening and to the best of our understanding this is how bad it could be.
And then they could list out possible mitigation strategies. Mitigation only - here are the pros and cons. Geoengineering - pros and cons. Vastly expanded nuclear - pros and cons.
But absent from that should be the aesthetic preferences of the researchers. If they like to ride their bike to work, great! Don’t let that influence the research.
If this is the solution to climate change and it horrifies the researchers - just take the win and let it go:
I knew two climate science grad students who were quite dismissive of the pop-sci narrative and the science celebrities grifting off of apocalyptic narratives. They absolutely believed in climate change as a serious issue, but as experts in statistical modeling they also recognized that there are a lot of simplifying assumptions in every model as well as uncertainty in extrapolations to the future. Moreover, they were offended by scientist-turned-celebrity figures in the media who eliminated all rigor and nuance in their public speaking. If anything, those pop-sci individuals did a poor job representing the expertise and challenges of climate science.
Yes! That’s a very good point. I don’t have the xkcd comic handy, but there is often a huge disconnect between what the study or report actually says and how the media reports it.
99% of the time I read takes about how members of [x] profession or field have such-and-such worldview, the take-giver doesn't actually know anyone from that profession or field, let alone a representative sample, and only knows about people who get quoted in the paper.
Public health officials putting out statements in favor of protests was one of the most mind boggling things I've ever heard. Even five years later I have a hard time believing that actually happened.
Do you remember the political attacks on Michael Mann, the researcher on the "hockey stick" graph? That's what I'm referring to.
I'm not saying what you're describing *doesn't* happen but Matt Hagy below describes this nicely... it's a consequence of our media infrastructure amplifying the most extreme voices. There are way more reasonable scientists communicating the way you describe, but they get drowned out.
Also, most scientists get into a field because they see *a problem* that needs to be addressed. Climate change is a pretty big problem! So is cancer research, so is energy research, etc.
Finally... in pharma (where I work) there's a pretty broad diversity of political stripes. There's probably as much disdain for the far left as the far right. But the far right has way more power! So even the conservative types have ended up being very anti-Trump.
That’s another important point as well - climate scientists being conflated with climate activists. And then of course the dangers of scientists becoming activists.
There's nothing wrong with being an activist! But as you rightly point out, if your activism (a) repulses more people than it persuades and (b) is wrong on the policy merits, you're doing way more harm than good.
I would go further and say that scientists should be activists for the same reason anyone with specialized knowledge should advocate based on that knowledge. You can make the case that the very purpose of tenure is to allow academics to be activists without fear of being fired.
The problem, as I see it, is when scientists start asserting that science is on their side. You can be wrong on the merits and advocate for stupid policies, but as soon as you invoke the "in my capacity as a scientist I assert that science supports my argument" you drag "the science" into the poisonous group dynamics of modern discourse.
It's a tough line to walk, because in many cases the science *is* clear about one side of an argument over the other (like, yes, vaccines are good and measles is bad), but other times it's way more muddled. Unfortunately it's usually the latter where people try to argue from authority rather than evidence.
When you have scientific journals and popular science magazines endorsing political candidates, it makes scientists look partisan rather than scientific. Also, some science societies have put scientists on a pedestal as policy advisors, rather than having them explain the science within its limits and letting policymakers make policy informed by what science can actually say. That has led some to a view that scientists are partisan.
I think the polarization of science is overstated, including by Matt Y in this piece. Look at the Pew numbers from October 2024 -- yes scientists took a hit after covid but are still one of the most trusted professions:
the press releases are often (well, always, as they have to be) a simplified version of the actual scientific studies, and sometimes written by press releasers rather than the scientists. Every once in a while I read both and wince.
One thing to know here is that many people in tech think that academic research in science is basically useless. This is downstream of some actually true things, like that academic AI research is not on the cutting edge of building big AI systems and that most drug discovery research doesn't work out when you try to commercialize it. That sort of thing, plus things like the replication crisis in social science, have led some people, probably including the tech executives who staff the futurist right, to skepticism broadly about funding for science, which is totally off base.
What irritates me about that attitude is the double standard: in the creative destruction of capitalism it is a good thing that people can try dumb ideas only to have their startup fail. But every scientific paper that does not describe something with immediate utility that is obvious to everyone is a huge waste of money. Never mind that your startup employs my former students.
As I understand it, Pfizer understands that (it’s constantly acquiring NIH-funded startups) but OpenAI doesn’t (it’s just hiring good software engineers). Differences between industries and companies.
I think there’s a more charitable way to put some of their concerns. Academia is set up in individualistic ways that encourage individuals to pursue their own crazy views, find out some neat things, and move on while making a name for themself. While collaboration is on the rise (particularly in biomedical science and big physics) you still put the names of the individuals on the publication, rather than doing it officially as a company or institution working to some collectively agreed upon goal. Academia is great for showing that some effect exists. It’s not great for figuring out how to use that effect in an economically viable way to make the world a better place.
But even academic engineers are doing something different than corporate engineers. Every academic has a personal website announcing their personal research projects. Very few corporate people do.
It's absolutely stunning the way the Trump administration took some reasonable critiques and ran with them allll the way off the wingnut cliff.
Yes, the Democrats/progressives went too far with "men are toxic" --> let's revel in our support for outright a-holes, misogynists, and rapists!
Yes, Democrats went overboard with DEI stuff --> HULK SMASH ANYTHING RESEMBLING DEI!
Yes, science has become politicized and full of liberals --> HULK SMASH SCIENCE!
Yes, some government spending is bloated and inefficient --> let's feed government agencies into the wood chipper and joke about it on Xitter!
Yes, illegal immigration is unpopular and Democrats should have done something about it sooner—> let’s disappear some legal immigrants to an undisclosed location in Louisiana or a sh*thole prison in El Salvador!
Some people are actually concerned about folks on the other side of the political spectrum being too extreme. Other people are actually extremists themselves who see extremism on the other side of the spectrum as a positive good and an opportunity to empower themselves to run wild.
One thing that blows my mind more than anything else about the Trump administration is the alienating of potential allies, both foreign and domestic. My wife's cousin genetically modifies wheat. Previously, his political enemies have been the anti-GMO left. A few weeks ago, he posted something on Facebook about a bunch of people he worked with at the USDA getting fired out of nowhere and how that's going to make his job more difficult. I'm definitely open to the idea that there's spending to be cut at the USDA, and I definitely don't think "give farmers whatever they want" is good policy. But it's crazy to me the way they're missing lay-ups and alienating one of the few branches of science and academia that should be friendly to them: farmers threatened by environmentalists.
I am in this field and have three not-very-well publicized facts about indirect rates to share:
1. They are typically expressed as a percentage of the direct cost, not the total grant amount. For example, if the direct cost is $100 and the negotiated indirect rate is "60%", then the indirect reimbursement would be $60, which makes a total grant of $160, so 60/160 = 37.5%. It's so stupid that scientists let the accounting shorthand become the number we discuss, because it makes it seem like huge amounts are being siphoned off when it is actually about half as bad as people think (see the short arithmetic sketch after this list).
2. The amount of indirect costs that can go to administration is capped at 26% (of the total grant) and as far as I can tell, universities were already hitting that cap circa 2010, before DEI really took off. So the incentives are not really as aligned as people may think for more indirects to lead directly to more administrators.
3. Research is expensive! Even with indirect costs, universities lose money on doing biomedical research and fill in the gaps with undergrad tuition, clinical fees if they have a hospital, state funds if they're a state school and philanthropy if they can find any rich patrons.
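Since the rate-versus-share confusion in point 1 comes up constantly, here is that arithmetic as a tiny Python sketch (the amounts are illustrative):

    # A negotiated indirect rate is quoted as a percentage of DIRECT costs,
    # so the overhead share of the TOTAL award is smaller than the headline
    # number suggests.
    def overhead_share(direct, indirect_rate):
        indirect = direct * indirect_rate   # e.g. $100 * 0.60 = $60
        total = direct + indirect           # e.g. $160
        return indirect / total

    print(overhead_share(100, 0.60))  # 0.375 -> a "60%" rate is 37.5% of the award
    print(overhead_share(100, 0.15))  # ~0.130 -> a 15% cap is ~13% of the award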
There's a great book for the wonks on here about university finances by a former chancellor for finance of UC San Francisco. It's published with a CC license so is freely available here: https://escholarship.org/uc/item/59p124ds
The second Trump administration is worse than I expected. I wonder if they are trying to cause a recession. Maybe the inner circle think four years of continuous expansion are impossible, and want to get their recession out of the way early, so that it will be morning in America in 2028. This worked for Reagan, though I’m not sure how much his administration coordinated with Volcker, whose policies at the Fed caused the 1981 recession.
I don’t wanna assume Trump, Musk, et al. have much in the way of foresight or strategic thinking, but if we offer them this charity then we could see them attempting to engineer a recession where the increased unemployment is concentrated among committed Democrats and blue states. That would include their defunding of scientific research and universities in general as well as NGOs. The impacted individuals would cut back on consumption, suppressing aggregate demand, and thereby putting downward pressure on inflation. Ultimately that could allow the Fed to cut interest rates. From the perspective of Republican leaning constituencies and some swing voters, this could be a net positive with prices stabilizing and borrowing costs for mortgages and car loans falling—without elevated unemployment among these groups.
Again, I doubt the administration is engaged in, or even capable of, conceiving and executing such a plan.
Chris Rufo's preoccupations are a better guide to the regime's thinking on this front, and he is not an economic thinker at all, just a culture-war jihadi. He's not trying to reduce aggregate demand in college-towns, he wants to crush universities because they champion enlightenment values, and those lead to secular liberalism.
Interesting, but the likelihood that Trump has the patience to wait out 6 months to a year of economic collapse is negligible, unless he really is totally and completely checked out.
Trump might plausibly think “I understand my immigration and tariff policies could cause short term pain, but that won’t really hurt my party so early in the election cycle.”
Carter brought in Volcker in ‘79 as the realization that a serious recession was needed to tame inflation began to be accepted. If you look you’ll see rates began rising in 1980 (Reagan took office in 1981). There was opposition within the Reagan administration and fear as to the depth of the recession. But to his credit Reagan said, “We’re doing this.” And it worked.
Obviously a techno futurist worldview should logically indicate pro-science positions, but shouldn’t nostalgia for the 1940s-50s post war boom also include nostalgia for great impactful science? Why are these cohorts all happy to take on the religious right’s position of anti-science?
They're just a flailing bunch of morons. Nothing, and I repeat NOTHING, is thought out or coherent except their purges of Trump's enemies (THAT part is impressively thorough and devious). Not their foreign policy, not their economic policy, and not their policy on science. It's all memes and impulses and emojis and lib-owning and making it up as they go along. It doesn't help that the guy in charge is an obese near 80 year old with the diet and sleeping habits of a teenage club hopper.
Some of the signature views of the American Christian right include teaching creationism in schools and opposing stem cell research. Related views that may be considered oppositional to scientific consensus are their opposition to abortion and comprehensive sexual education.
The first sentence is absolutely and demonstrably untrue in the United States to an absurd degree. I genuinely don't know how you could think that unless you are in a bubble where almost all the Christians you know are highly educated Catholics and mainline Protestants.
Even among Catholics, who have absolutely zero doctrinal obligation to believe in creationism (and indeed one could argue that continued belief in creationism is heretical at this point), 32% subscribe to a hard creationist view that humans were directly created by God within the last 10,000 years.
Yeah, I think that statement is mostly correct. Those are moral objections, not disagreements about science. People who think harvesting stem cells is morally impermissible aren't denying the scientific value of doing it. I wouldn't say it has *nothing* to do with science (because science is what it affects), but the disagreement is not a disagreement about science.
"defense" covers a huge range. Like, government defense funding literally enabled the internet. And it's really not hard to see how AI is relevant to defense.
Silicon Valley literally would not exist if it weren't for government funding, and not just because of "inventing the internet". Google is one of the more famous examples; there was a whole stink from progressives in the early aughts about the fact that most of the search engine tech was invented while the founders were at a public university.
Fantastic post, thank you for writing this! I have to get to work, but a couple of thoughts:
1. Do not ever apologize for writing "Trump is bad" posts. Trump *is* bad, and it's ok to say it! Of course I get you don't want to write those posts every day (plenty of other Substacks specialize in this).
2. Exactly right about NIH indirect costs: is there some bloat? Yes. Could they be negotiated down? Most likely. Is slashing them to 15% a good idea? No, it effing isn't! It's not like all this money is going to "Assistant Vice-Provost for DEI" or some such bullshit, the majority of it goes toward actually useful things that are necessary to make the university run.
Fun fact: I spoke with my department chairman (at a large, well regarded, public R1 university; R1 is the Carnegie classification for universities with the highest level of research activity, in contrast with predominantly teaching institutions). I asked him what would happen if the NIH did cut indirect costs to 15%. He said there's absolutely no way we can run the department on 15% indirects. What would we do then? I asked. There was a moment of awkward silence and then he said something about "considering various options" that came across as "it would be very bad, but I don't want to tell you the specifics because I don't want to scare you."
Please note, a lot of the funding goes to admin support that is NECESSARY to comply with important federal regulations! Anyone who works with biohazardous materials (pathogens, recombinant DNA) must comply with biosafety rules, anyone who works with human volunteers must comply with IRB (Institutional Review Board) rules, and anyone who uses vertebrate animals, like lab mice, must comply with rules for ethical treatment of said animals.
You can't just fire a bunch of biosafety and IRB and IACUC (Institutional Animal Care and Use Committee) admins and expect everything to go just fine!
I swear, if the indirect cost cut goes through, I will be sorely tempted to take a bag of biohazard waste, put it in my car, drive to the Trumpiest neighborhood near me, and dump that bag in the middle of a sidewalk. Here you go, folks, a little gift from your friendly neighborhood scientist! You wanted to kill biosafety oversight, you got it. (Of course I'd never do it, but it's fun to fantasize about it!)
3. Trump 1.0 championing Operation Warp Speed (one of the few things his administration did right) was a real missed opportunity for Trump to align himself permanently with scientists as opposed to anti-science, anti-vaxx cranks. Sad!
“you can only explain the influence… with regard to the fact that the technophile wing of the Trump administration believes the singularity is imminent.”
So they’ve invented their own fantasy theological Rapture, but without the tedious parts of Christianity like charity, empathy, and a belief in the sanctity of all human beings.
Apparently Christianity is making a big comeback in Silicon Valley. Could be a combo of a rightward drift and almost religious like reverence for the “coming singularity”
My observation is the opposite - the Zvi’s and Hintons of the world are observing the exponential rate of progress in AI in a very scientific and data driven way, while the AI pessimists like Marcus have an almost religious view of the human brain as being divinely touched, and thus continually make unfalsifiable claims that LLMs can’t possibly be capable of logic and reasoning despite overwhelming evidence to the contrary.
I don't think it's defensible to SURE either that current LLMs are a step on the road to AGI *OR* that human cognition is special in some kind of way and doesn't work along similar processes, at an exponentially more advanced level. Believing either with certainty can easily be classified as "religious belief;" they both rely on faith to draw a conclusion that is beyond where evidence can take us.
It's hard thing for people to imagine in the age of the internet and Trumpian politics but the best thing to do is to keep an open mind...
My theory: too much drama in the polycules, it was like if you dated a crazy person in your youth, fun for a while but then it gets old and you want just get married and be a normie.
Either a normie or the proud owner of a brand new tradwife.
We really need a word other than Christianity for "the kind of pseudo-Christianity popular among the MAGA that has piss-all to do with the actual teachings of Jesus."
Xianity?
Then I could ask questions like, "When you say Christianity is surging in Silicon Valley, do you mean Christianity or Xianity?"
It's not any Christianity I recognize.
I think this is just impossible without some real weird synthesis. The tech people I've met are so Nietzschean, their entire ethos cannot be a part of the Christian tradition. It feels absurd to say this but sincerely believe that Constantine I or Charles V had more Christian compunction and restraint than any of these tech people. They're just creatures of will.
They think they're making God on their computers!
And maybe they are but that's just simply not Christian.
Freaking heathens. What pisses me off more is that I can’t call them apostates. Only Vance and such are those.
Got any good reading on this phenomenon?
"Got any good reading on this phenomenon?"
Paul's Epistle to the Cupertinians.
You joke, but some of these tech guys could get a lot out of reading Philippians.
https://www.vanityfair.com/news/story/christianity-was-borderline-illegal-in-silicon-valley-now-its-the-new-religion?srsltid=AfmBOoqE9Z_htWhPuf-T4fgtuMYFOu0PAFBkuVlRSAyNXRQzk5B8DYJ3
Appreciate the link, interesting read.
I'll just say, as a sort of "warning" from someone a bit older: these "trend" pieces about how a particular segment of society, or America as a whole, is turning in some new direction are a very old genre of newspaper writing...and often turn out to be bunk.
Basically, a reporter wants to capture the "zeitgeist," so they write a piece interviewing a few leaders within a movement and then use this as a jumping-off point to claim that this is the new big trend in an area or in the country.
A good example: back in 2010 there were multiple trend pieces about how libertarianism was the new rising force in the GOP and America because of the Tea Party. New York Times Magazine had a big profile interviewing Ramesh Ponnuru and others about how there was this big libertarian moment happening (the thing I actually remember most is that the reporter in question made sure to mention they dined on filet mignon for their conversation...as though he needed to flex about his expense account or something). Turns out Thomas Massie was basically correct https://www.washingtonexaminer.com/opinion/1881868/rep-massies-theory-voters-who-voted-for-libertarians-and-then-trump-were-always-just-seeking-the-craziest-son-of-a-bitch-in-the-race/
I think the most hilarious one of this genre is this one: https://www.nytimes.com/2022/08/09/opinion/nyc-catholicism-dimes-square-religion.html. Cannot emphasize enough how ridiculous this trend piece is if you know anything about this neighborhood.
So yeah, be very, very wary of articles that take interviews with a handful of supposed "thought leaders" and claim a new trend is happening. Will bet money that when the Pew numbers come out, they will show that while Harris probably ran behind Biden in Silicon Valley this time around, she still overwhelmingly got most of the vote, and that in 2028 the Dem nominee will get a vote share close to Biden's.
I lived right by Dimes square for a year! All pieces written about it massively overstate the weird underground groups that run through it. Most people are just there for the $17 glass of natural wine.
Funny that. I don't live near Dimes Square, but I've definitely hung out at bars and restaurants in the area many times. So like you, I had personal experience to know this article was nonsense.
Like this Silicon Valley stuff. The way, way more likely "explanation" for super-high-profile very rich people attaching themselves to Trump is that throughout history, the super rich have tried to cozy up to power for their own ends, and that with Silicon Valley being a more mature industry now than it was even 10 years ago, its super-rich people are just now acting like they always have.
I wouldn’t call these stories “bunk”. I would say that they report on real microtrends, but often give the impression that they are describing a whole culture, rather than one interesting and weird subculture. It’s like writing about 15th century Florence and saying everyone was a rich banker or an artist, forgetting that well over 90% of the population was neither of these things.
Fair enough.
This discussion sort of reminds me that this is all another example of the biases that happen with media overly concentrated in NYC and tech overly concentrated in the SF metro area. These are both areas famous for being open to different types of cultures, ideas, subgroups etc*. So there are all sorts of microtrends and subgroups in both areas. And all it takes is the right reporter living in the right neighborhood (or hanging out with a particular group of people) to overextrapolate a very real trend attracting increasing numbers of people and claim this thing is a much, much bigger trend than it really is**
* I can't be the first person to note that of the many reasons why Silicon Valley became "Silicon Valley", the fact that since the '50s/'60s San Francisco has been a haven for alternative culture has got to be one of them; a disproportionate number of people clustered on the high end of the "openness to something new" scale has to be part of the story here.
** Always think with these trend pieces that if you can insert the quote "Anyone who's anyone is doing XX" then you should immediately treat these trend pieces with a skeptical eye.
"I'll say just as sort of 'warning' as someone a bit older; these 'trend' pieces about how a particular segment of society or how America is turning in this new direction are very old genre of newspaper writing...and often turn out to be bunk."
And of course, this is extremely unsurprising, because not only is this kind of trend analysis rarely the least bit empirical, it takes wild swings based on tiny changes in data.
There is not really much of a meaningful difference between the temperaments of a country where Kamala Harris wins 49-47 and one where she loses 49-47, but we've all decided we have to craft sweeping narratives about how now Americans love Trumpism and we're in a new right wing golden age and blah blah blah. Just like 17 years ago we all said everything was progressive now, and the fundies had been defeated, and blah blah blah, because the Republicans had one bad election cycle.
When you drop the phenomenon from overall politics to even more nebulous questions like "how many computer coders are down with the Jesus" you are sure to get some scattershot takes.
Could not agree more. The amount of over extrapolating from one election result is nuts. Especially as you say the shift in votes is quite small.
I do think there is value when the shift in vote is a) quite large and b) seen over multiple election cycles. For that reason, I do buy that there is a trend with Hispanic background voters. The shift in 2024 was quite massive and seen over multiple election cycles*.
But yeah, honest to god, given how badly incumbent parties did throughout the developed world, it's really hard for me to escape the "it's the inflation, stupid" angle to the election results. If anything, Trump's repulsiveness is probably why the margin wasn't much higher in Trump's favor, judging by how elections went in other countries.
* one reason I liked Kevin Drum (RIP) was his commitment to backing his posts with data. Or more accurately looking at conventional wisdom and seeing how often the data didn't actually back up the CW. And he had a very good post showing that for all the pontificating how maybe Silicon Valley shifted right or black voters or young voters, the story really is about Hispanic voters. https://jabberwocking.com/kamala-harris-bombed-with-hispanic-voters-thats-the-whole-story/
I feel like it's happened multiple times where I've read a trend piece in the NYT, and later found out (through googling or scuttlebutt or whatever) that all the people profiled in the piece were in the same social circle as the author (went to school together, or friends of friends, or what have you). What a way to make a living: hear an interesting tidbit about some friends and then turn it into national news.
i think this is putting a slightly conspiratorial spin on the common phenomenon of writing about what you know. when it is somebody like noah smith doing it, then i know how many grains of salt to take that kind of article with. when it is an unfamiliar byline in the times, maybe less so.
The weird tendency for 'independents' in news stories to end up being local RNC leadership.
Thanks, will read later.
"Christianity is making a comeback in SV" - any citations here? Because I haven't seen it so far (looking from the inside of a large SV firm). What I mostly see is people scrambling to keep their jobs.
Oh I see, it's based on a Vanity Fair vibes article.
The way some (definitely not all) people in AI approach the topic really does remind me of religious fanatics awaiting the rapture.
Like a group of evangelicals who denounce "the sin of empathy," many of the "Longtermists" in particular seem to have stripped effective altruism of all its altruism towards their fellow man and replaced it with vague and untestable notions about AI.
I don't know where this disdain for longtermists comes from. While I wouldn't identify as one, I am sympathetic to the idea that, *in addition* to caring about the current population, we should care about people in the far future. Elon Musk is most definitely not a longtermist, as he is destroying important institutions - something longtermists think about and value a lot.
I recommend checking out https://www.forethought.org/ for some thoughtful longtermist content.
"I don't know where this distain for longtermists comes from."
If this is just a way of saying, "I wish people didn't immediately associate longtermism with Sam Bankman-Fried," then I sympathize -- I wish that too.
But if you are really saying, "I don't have any hypotheses about why people might think badly of longtermism," then I'd suggest you google "Sam Bankman-Fried".
It's unfair, I agree, but it's not the least bit mysterious.
Setting aside SBF, the longtermist project is inherently unfalsifiable in general, and longtermists tend to be overconfident in spite of that fact. If we do realize the default outcome where AI is limited (by energy?) and the population decline continues, it will seem obvious that a bunch of ethicists got nerdsniped by abstract moral questions into ignoring anything with actual moral relevance.
Well, all moral theories are unfalsifiable by the is-ought distinction. And if we assume that you are right about AI, even then I would think it is a good idea for philosophers to think about how to build more resilient institutions and more stable geopolitics. Lastly, longtermists (in the EA tradition) donate, at very high rates, 10% of their income to charities that improve the lives of people today - that is to say, they very clearly care about things of moral relevance today.
Your second point is well taken. But if you’re going to base your morality on claims about the nature of the far future then it does, in fact, matter how accurate your predictions are in a way that isn’t relevant for moral systems that revolve around sun worship.
The only claim about the far future that longtermists base anything on is the claim that there could well be far more people in the future than in the present. They definitely don’t base things on the claim that there *will* be far more people, because one of their major causes is preventing human extinction.
I'm not talking about disdain for the far future. I think that it is worth considering, including AI risk, even if many of the claims are brought up in untestable and mystical terms. I'm talking about people like Elon who claims that he's on a mission to help mankind's long-term survival and that that mission is served by taking aid from needy people today.
Maybe he started off justifying it to himself as a necessary tradeoff between general welfare today and long term survival. But if you read his correspondence with government agencies over the last two months it is ABUNDANTLY clear he is enjoying the cruelty of it, and thrilled to be taking out hordes of "NPC"s.
Yes, I agree that Elon would rather let the world perish than not be crowned the hero, even if he did not help or is even the one destroying the world.
As the old Greek proverb goes, "a society grows great when old men plant trees whose shade they know they shall never sit in, but first they have to burn down every single tree currently in existence to make things nice and clean for the new trees."
I wonder if it’s partly a media effect where the AI risk stuff just attracts more eyeballs and so gets more play, while all the EA effort aimed at current issues is considered too boring for a mainstream readership. It also just seems like a cooler thing to be involved in than trying to make incremental progress in animal welfare.
Well that, and the convergence several years ago between the initial England and Oxford-based EA cohort with the zesty San Francisco tech set, who filled their impressionable English minds with blabber about transhumanism through AGI and colonizing distant galaxies. Probably also why the only rational, simple, and straightforward approach to AI risk - a research moratorium - wasn't even seriously considered.
I mean, we're all going to die, so longtermism is inherently unattractive to anyone who only cares about himself.
Longtermism is just a thinly veiled excuse to make people suffer today for its own sake or to make other people rich.
I think the issue is that while lots of people talk about the importance of the wellbeing of future generations, most of them only do so in the context of environmentalists yelling at people for consuming too much. Yelling at people for their consumption habits is a standard part of politics that people understand. What longtermists do is go beyond yelling and instead actually think about how to benefit future people. That feels weird to many people. It's not as fun as yelling at people who use too much plastic.
I mean, the case that it isn’t in fact a fantasy is actually rather strong. AGI/ASI is a fundamentally different and frankly more dangerous technology than anything that has come before, and the consensus opinion among the most informed persons in the field is that we’re 2-5 years away from AGI (probably on the lower end). The alignment problem is not solved (if even soluble), and that essentially means human extinction by default as a result of the creation of a species smarter and more capable than humans, whose goals benefit from the consumption of resources that humans also benefit from (and which capitalist incentives dictate be given as much power and control as possible, because why would you ever want a less-capable human who needs to sleep and sometimes gets sick in charge of things instead of a more-capable AGI? It’d be a breach of fiduciary duty to allow it.)
There really is a sense in which relative to AGI / ASI every other issue is rearranging deck chairs on the Titanic and *at a minimum* it behooves people to take that prospect seriously if only as a matter of expected value (probability * magnitude of harm) even if for some reason they believe contrary to expert consensus that the probability itself is low. Blithe dismissal of the whole prospect seems like a commitment to willful ignorance.
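To spell the expected-value arithmetic out in a toy sketch (the numbers below are purely illustrative, not anyone's actual estimates):

    # Toy expected-value comparison. All numbers are made up for illustration.
    def expected_harm(probability, magnitude):
        return probability * magnitude

    # A "low" probability of an extreme harm can still swamp
    # a high probability of a merely large harm.
    print(expected_harm(0.02, 8_000_000_000))  # 160,000,000
    print(expected_harm(0.90, 1_000_000))      # 900,000

The point being that blithe dismissal requires arguing the probability term down to effectively zero, not merely "low."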
The consensus opinion among researchers not heavily invested is that we are NOT close to getting AGI and are unlikely to get there using LLMs.
I personally think people should read more Gary Marcus and less Vinge when trying to understand our current situation.
Definitely not the *consensus* opinion! I work in academia on areas just adjacent to ML and AI. Some people do remain skeptical of AGI, but many of us worry a lot. (A few are giddily optimistic.)
I’m no expert, but my outside view so far has been terribly confusing.
What do people in your community make of the following combination of facts?
1. AI will often solve objectively hard math problems (e.g., “find the homotopy group of this weird 9-dimensional manifold”)
2. It often makes *atrocious* mistakes on simple problems. (Just yesterday I was yelling at it for several consecutive messages because it couldn’t take the first-order condition in an extremely simple Hamilton-Jacobi-Bellman equation. It was extremely resistant to correction.)
I find that on anything even remotely open-ended, it generally performs poorly.
For a quick response, I would say that simply the fact that we're having a conversation about to what extent AI is good at graduate level math should undermine the confidence in any prediction that "true" AGI is impossible. Turing test. Self-driving cars. Medical diagnostics. Humor. Etc. So many advances in a very short time.
Beyond that, what gives me pause is that in this field, the people with more expertise tend to be *more* concerned about truly dramatic evolution than those with less expertise.
This is what people call the “jagged frontier” of artificial intelligence. It’s just a far more extreme version of the phenomenon of the absent-minded professor, or the fact that dogs can be so good at understanding and manipulating people but still can’t learn that thunder won’t hurt them.
I don’t believe there is a single thing called “general intelligence” - there are many different kinds of intelligence, and any sufficiently alien being that has a lot of them is likely to have a very different mix than you, and thus will look eerily amazing at some things and dumb as a rock at others.
As one can see by observing the career of Donald Trump, you don’t have to be good at all kinds of intelligence to be dangerous.
This is interesting -- I wasn't aware that this had a name.
But going back to the original point way up in the thread, I feel like I'm frequently told AI researchers/Polymarket/whatever are in total consensus that "AGI" is coming soon. It seems like you're saying the term isn't even well-defined (which I agree with).
What is the “it” in question? Are you using the newest reasoning models (o3, sonnet 3.7, etc)? The progress really is astonishing, albeit not always mistake free (just like, you know, human experts)
Yes -- both Sonnet and o3 have proved utterly idiotic on a range of problems. I'd say that I see idiocy far more often than I see brilliance from these models.
Are the "AI" in (1) and (2) the same tool?
My impression is that there are specialized agents capable of super-human performance in narrowly-defined domains. And that's great until you step beyond the bounds of that domain.
But "How easy is it to expand the bounds of these domains?" and "How easy is it to create new agents with super-human performance in new domains?" are not obviously answered. To the extent that answers are forthcoming, for both questions it seems to be, "Not very easy".
The recent history of large language models shows that it’s actually surprisingly easy to expand the bounds of these domains, though hard to control or predict which particular directions the bounds will expand in. Transformers were designed to help with translation, and then they discovered they could do sentence completion, and then when they tried to teach it to do sentence completion it was suddenly able to translate more languages, and write computer code, and answer questions. Trying to get it to answer questions more accurately led to improvements in mathematical ability and human manipulation skills, and new capabilities keep appearing, but not necessarily the ones people are aiming towards.
Can you be more specific about 1?
I.e. what manifold, what homotopy group and which AI model?
In the examples I've tried, it often can't figure out the number of components of a curve in the plane...
Oh, 1 was kind of a joke, sorry. I just meant o3's ability to solve frontier math problems (and the ones I've seen looked about as ridiculous as my joke example).
76% don’t believe scaling of current approaches will result in AGI per https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf
Page 63.
I was thinking less Vinge and more Hinton / Bengio / Aschenbrenner / Altman / Amodei.
As to Marcus, I honestly don't understand his point of view. The fact that he is willing to make falsifiable predictions like this is epistemically virtuous and admirable (https://x.com/GaryMarcus/status/1873766399618785646), but other than 9 (and I guess 1, kinda, to the extent it's compute/context-length bound) these all seem like things that are basically within the capacities of current models with sufficient scaffolding, let alone by the end of 2027. How is (2) not just "a longer-context-window version of a thing that SoTA models are obviously very good at"?
I disagree completely. Hallucinations are still a huge problem. AI is good at code assist, but it's not good at end-to-end process without direct human expertise. What "Oscar-caliber screen plays" or "Pulitzer-caliber books" has an AI written? I'm floored you think current models can do these things.
As they exist today, they're tools that humans can use for some tasks. But they still make lots of shit up that you won't notice if you aren't an expert, and while they're good at mimicking the form of great writing, they're still quite bad.
"What "Oscar-caliber screen plays" or "Pulitzer-caliber books" has an AI written?"
Those actually seem like very easily achievable goals for generative AI even as it stands. 90% of new screenplays written are just reassembling parts from previously written fiction -- how many versions of Jane Austen novels have been filmed? Do we have any evidence that Tarantino is not just filming AI-synthesized remixes of previous films? True, if we set our AIs to generating screenplays, then most of them will be bad. But most screenplays written by humans are bad. And if the AI generates thousands or millions of them, then some of them will look like good ones.
So I think those are pretty low bars.
But how about the fact that I recently asked Google's AI to give me a five-letter word containing B, F, and G, and it came back and told me that "buggy" contains the letters B, F, and G. That's not just stupid, it is completely indifferent to the truth. It simply dgaf about whether what it spits out is true or not.
Oh, that's just because LLMs don't "think" in letters, they "think" in tokens. If you think this is some unknown fundamental limitation, it isn't.
Even so, just ask it to write some Python code to find your word and it will.
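For instance, a minimal sketch of the kind of thing it will write (assuming a standard Unix word list at /usr/share/dict/words):

    # Find five-letter words containing all of B, F, and G.
    required = set("bfg")

    with open("/usr/share/dict/words") as f:
        for line in f:
            word = line.strip().lower()
            if len(word) == 5 and required <= set(word):
                print(word)

On most word lists this turns up "befog," which is presumably what the original question was fishing for.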
If you look at surveys it’s safe to say most AI researchers are skeptical we’re on the cusp of AGI https://www.newscientist.com/article/2471759-ai-scientists-are-sceptical-that-modern-models-will-lead-to-agi/
Are these academic researchers? There seems to be a bit of sour grapes as academia wasn’t as involved in this as they would have liked.
Pretty much everybody who either actually has to use the darn stuff in an actual business environment or is studying it, and doesn’t think they are going to become a billionaire doing it.
Terrific and useful tools are here and getting better. AGI, not so much.
Marcus IMHO does the clearest explanations.
Gary Marcus is deeply committed to a Chomskyan paradigm that I think has been falsified about as effectively as any paradigm can be. I think it’s more useful to look at the critiques that Hubert Dreyfus was giving of AI in the 1970s and 1980s, which he thought were critiques of the possibility of AI at all, but turned out to be critiques of the Chomskyan symbolic paradigm, with neural nets getting things much better.
I would describe Marcus’ critiques as being more about what LLMs are currently capable of, whether there are better paths, and what those paths are likely to be. He is IMHO not arguing that AGI is impossible, but rather criticizing the current approach as a dead end. I personally think his critique differs from Chomsky’s.
When was your last chat with GPT 4o?
4o has that new image generation feature. I uploaded a picture of myself to mess around with and 4o told me I look "fantastic" and complimented my Betsey Johnson boot. So I think AI is great.
Their conflict of interest means we shouldn’t trust them, you say…
It affects your judgement when you really want something to be true.
Or if you really want it to not be true because it’s embarrassing that you missed it.
Or if you’ve decided it will make you rich, and you’re terrified someone else might get rich instead of you if you don’t go all-in.
Gary Marcus is a terrible author on this. He’s like the opposite extreme and appeals to people who desperately want this to not be happening.
Marcus appeals to people who see the limitations on LLMs, and think both that they are useful tools and that they have important limitations.
He also thinks that AGI is not likely to be reached via LLMs, but rather by different approaches, which BTW is the majority position of researchers.
Just looked at the 'Marcus on AI' substack. First impression is that it reminds me of the string theory critics who became prominent a decade or so ago. (Him quoting Hossenfelder helped.)
You can tell that AGI is bullshit by the sheer amount of evangelism (often likely paid evangelism) on the subject, including disgustingly often by Matt and people like Nate Silver. If it were really transformative and really 2-5 years away this would just be obvious and not something people would need to be "sold" on like cryptocurrency, another Silicon Valley scam that most of the same people are falling for.
I think “general intelligence” is a red herring. The long history of IQ should help make this clear. But even if there is no single thing that is intelligence, it should be clear that any person or being that has lots of some sorts of intelligence is worth paying attention to, no matter how dumb it is in other ways. Just look at Elon Musk for an illustration of that.
Are humans more generally intelligent than worms?
I define intelligence as the ability to take in information and process it in ways that enable effective action towards some goal. There are a lot of types of information that humans are able to process effectively that worms aren’t, but there are some forms of information that worms can process that humans can’t. Likewise, there are a lot of types of goals that humans can act effectively towards that worms can’t, but there are some goals that worms can work effectively towards that humans can’t. In this particular case, the advantages of one are so much broader and bigger than the advantages of the other that on basically any reasonable way of attempting to collapse this into a linear scale, the humans come out on top. But that doesn’t mean that the scale actually is linear, or that every comparison is clear cut.
It’s easy to see that Walmart is a bigger business than Peet’s Coffee, even though there are some measures on which Peet’s might win. But I think if you try to compare Walmart to Apple or Saudi Aramco or Tesla, it becomes a lot harder - those three all have much higher market capitalization than Walmart, but Walmart has a lot more employees, and more physical presence in more of the world, and a lot more customers. Walmart is clearly smaller than Amazon, but Amazon also isn’t clearly bigger than these others either. There’s a lot of incomparability of size of corporations, and I think there’s also a lot of incomparability of intelligence, even if some comparisons are clear.
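One toy way to make the incomparability point concrete (figures entirely made up, just to show how neither of two things need dominate the other):

    # Profiles: (market cap, employees, customers) - made-up units and numbers.
    walmart = (1, 9, 7)
    apple   = (8, 2, 5)

    def dominates(a, b):
        """True if a is at least as big as b on every measure (and bigger on one)."""
        return all(x >= y for x, y in zip(a, b)) and a != b

    print(dominates(apple, walmart))   # False
    print(dominates(walmart, apple))   # False: neither dominates, so no clean linear order

Same with intelligence: once you measure on more than one axis, "bigger" is only a partial order.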
I agree... an LLM is nothing but a simulation of intelligence. There is no... intelligence behind it... basically just a giant calculator of language.
What is the difference between “simulation of intelligence” and “intelligence”? I think that believing there is an important difference here is what leads so many smart people to feel imposter syndrome, where they think everyone else around them is actually intelligent while they are just faking it. It turns out that faking it effectively is all there is to intelligence.
Difference between sapience and sentience, right? I have not followed these debates closely, and this has probably been addressed, but you can easily write a program that protects its own existence by preventing you from deleting it. But we wouldn't say that the program fears death, or experiences pain upon deletion, even though it's superficially behaving the same way an animal would.
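A toy sketch of what I mean (nothing here "fears" anything; it just behaviorally resists deletion):

    # A script that "protects its own existence": if its source file is
    # deleted, it restores itself from a backup copy. Superficially
    # self-preserving, but there is no fear or pain anywhere in it.
    import os, shutil, sys, time

    SELF = os.path.abspath(sys.argv[0])
    BACKUP = SELF + ".bak"
    shutil.copy(SELF, BACKUP)  # stash a spare copy of ourselves

    while True:
        if not os.path.exists(SELF):
            shutil.copy(BACKUP, SELF)  # put ourselves back
        time.sleep(1)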
I think of “sapience” as the ability to manipulate cognitive information, and “sentience” as the ability to have awareness of one’s environment (and perhaps of oneself as part of the environment). When Bentham says “it’s not can they think, but can they suffer”, he is getting at this distinction.
But I don’t think this is quite the same as the claimed distinction between simulated intelligence and real intelligence. Turing argues (flat-footedly) for the behaviorist kind of idea I suggested - if you simulate intelligence well enough, that just is intelligence.
I think that the program that acts to prevent deletion becomes more and more like a being that has fear of death as its actions to prevent deletion become more and more potentially widespread and long-ranging. People don’t translate every awareness of risk into action to mitigate that risk, or even trade that risk off against other goals - but we do a lot more thinking about long-term risks (like cancer and fascism) than many other animals do.
Unlike us.
"...it behooves people to take that prospect seriously if only as a matter of expected value (probability * magnitude of harm) even if for some reason they believe contrary to expert consensus that the probability itself is low."
Thanks, yes, I am familiar with Pascal's Wager.
DT, I normally like your posts, but invoking Pascal’s wager in a world where Claude, DeepSeek, 4o/ChatGPT, and Gemini exist and are making astounding strides far beyond what anyone thought possible four years ago is like invoking Pascal’s wager in a world where Jesus is turning water into wine right in front of you.
Right, and the appropriate response to seeing someone turn a clear liquid into a red liquid in front of you is not to say “this man is clearly the son of the Jewish God, who exists and created the world in seven days a couple thousand years ago”. Even if a bunch of people who hang out with this man all think so.
Presumably the correct response would be to check if it actually was wine. I thought this was an article *in favor* of the scientific method?
Right, and if it turns out to actually be possible to transmute water to wine, that’s impressive and gives the guy credibility but I’m still skeptical about the 7,000 year old Earth thing, to strain the metaphor.
Text prediction is not general intelligence, and it certainly isn’t superintelligence. It makes them likelier, sure, but (for example) I’d still wager on humanity being destroyed by politics rather than a rogue AI.
You wrote, "even if for some reason they believe...the probability itself is low."
So, you were asking, "how should someone proceed if they think that the probability is low and they think the magnitude is extremely high?"
I then commented that I recognized this as the set-up for Pascal's wager: what should a maximizer of expected value do if they believe there is a very low probability of a very bad outcome.
I did not say that I believe the probability is low; that was the hypothetical that you introduced.
You then rejoined the conversation to assure me that you do not think the probability is low. Okay, I'll keep that in mind.
I took your point regarding Pascal's Wager to be dismissing the entire prospect of this being relevant (consistent with your top level comment). It's usually invoked in the context of implying that a given probability is negligible such that, being incapable of being distinguished from background noise, it should be ignored because the set of elaborate circumstances required for the alleged harm to come about doesn't deserve to be privileged over various other (likely incompatible) alternative prospects. But this isn't what we have here: just as Jesus turning water into wine suggests that all of a sudden the correct interpretation of Christian theology *is* privileged within probability space, progress in AI and expert opinion regarding AGI imminence is far above background evidence that this is a threat to be taken seriously in view of the obvious harms (and *demonstrable examples* of the kind of alignment failures predicted by people concerned with AI alignment) implied thereby.
Pascal's Wager isn't actually a moot proposition because it's conceptually wrong to care about expected utility; it's a moot proposition because the set of "things that result in infinite utility loss" is basically infinite. Believing in God / Yahweh / Adonai / El as the one true God is incompatible with various other religious traditions (actual or hypothetical), and the various testimonies thereof don't necessarily move the needle far if all of your other experience is incompatible with the supernatural. But if Jesus shows up in front of you and turns water to wine, that should in fact affect your priors about which possible religious worlds to pay attention to.
The "G" means "general," right? AI has evolved spectacularly in just a few years. But how would those specific awesome capabilities ever become "general"? I'm not even sure what that would mean. Would Gemini be able to solve the hardest math problems *and* be an excellent therapist *and* interpret radiology scans *and* process legal documents? Or more simply, would we ever have a single entity that is great at both chess and Go?
Plus the thousands of others things a typical human can do, even if not as well in each case as a single top of the line AI program.
So I still don't know what the "general" in AGI means.
As someone who is quite worried about this, the appeal to lab timelines + talk about loss of control is anti-convincing to smart non-experts. The misuse risks are concrete, imo more scary, and will arise sooner!
I appreciate the intervention although I have to disagree re: more scary. Misuse is a concrete risk, and should be taken very seriously, but it’s fundamentally different in kind and *probably* not literally an X-risk. At least no one stands to make money off engineered plagues (unless you load up on puts, I guess, but that seems like a worse strategy than more productive forms of model exploitation.)
Exactly this. Even if [probability] is low, [magnitude of harm] is extremely large.
And personally, I do not think that the probability is that low. It depends on what you mean by AGI, of course. Perhaps what will emerge will remain less capable than humans in certain regards. But it will obviously be much more capable in others.
I always find it strange when folks say this and don't seem to realize it's an argument for banning AI research and an extensive...well, war against any nation which refuses to do so. Like, if you take Sam Altman seriously, it sort of leads to the conclusion that he should be in jail. At best.
Butlerian Jihad when?
Yesterday
Uh, there’s a reason that most of the people at these labs have called for government restrictions on the research, and a good number of them did call for a ban. But just like with nuclear weapons research, the people who think stopping all of it would be a good thing also think that the worst thing would be allowing it to proceed unfettered in lawless places while stopping all the law-governed research.
Yeah, you have to stop both. Which would mean war with China, at least, but I’m not completely against that.
Wait, who doesn't realize it's an argument for these things? In broad strokes I would agree with that (there are a lot of epicycles), although I think international coordination is obviously preferable to war.
There are several steps between the creation of a digital process that can do anything a person can and a radical transformation of society. They need to be massively rolled out at a favorable price without being limited by something like energy or training data. ASI is riskier because there are unknowns around its capabilities but human geniuses certainly haven’t been more dangerous than idiots, historically.
Price and energy seem like currently-solved problems - relative to the cost of a human to perform tasks within present model capabilities the price of tokens / query responses is indistinguishable from zero.
I'm generally not sanguine on training data limitations being meaningful simply because it seems like no one's training on realtime visual-motor-sensory input (which is de facto unlimited) all that hard nowadays (presumably Boston Dynamics and I guess Tesla / Waymo would be at the forefront here in terms of people with use cases for that), although it seems that NVidia is trying to make a very efficient world-physics simulation to aid in robotics development that I have no ex ante reason not to expect to work.
Would real-world motion data improve alg topology proofs? I thought the standard answer to the data issue was “we’ll solve it with RL”
I would guess not alg topology proofs but I also think that alg topology proofs may just be a "throw enough scaffolding and compute and tries at it" situation (although I could be persuaded otherwise).
Unfortunately I also think that "AI has limited Type-2 reasoning capabilities in certain deep domains" doesn't really reflect a substantive mitigation of risk. You don't need to understand parabolas or algebra to be really good at throwing a ball accurately, and I think that "sufficiently good statistical correlation plus the implicit abstraction afforded by large numbers of hidden layers" is likely to be more than sufficient to pose all real-world relevant dangers of machine cognition. Ed.: particularly in view of all the realized examples of misalignment risk being shown in current model research.
Compute in what form? Certainly running current models for longer is completely hopeless here, that’s what I’m asking. Ofc I agree that alg top is completely useless but biology skills aren’t, and I suspect there’s a similar question there
I mean, deployment is going to be limited by something eventually. Price and energy seem cheap now because they’re only using less than 1% of total energy and have limited applications. And just because nobody’s training on real-world data now doesn’t mean it will happen quickly for all problems when they do. It’s easy to imagine a world with self-driving and -flying drones but very few things that interact with people socially, for example.
This seems relevant and timely:
https://www.theintrinsicperspective.com/p/welcome-to-the-semantic-apocalypse
I hope the solution is as simple as unplugging things and then turning them back on. (Now I need to run a Paranoia game with this goal in mind.)
> the consensus opinion among the most informed persons in the field
There's a lot of stuff smuggled in here.
They even reinvented Hell from first principles: "Roko's Basilisk" says that any person who doesn't help the future godlike AI come into being will be resurrected as a simulation and tortured for all eternity.
It's weird that we're having this discussion. Even if AGI does not arrive, LLMs are going to upend society in profound ways. Like... literally this last week, OpenAI probably just ended most of photography as a profession. Yes, art will still exist, and we'll still need some photographers to cover events, but play around with ChatGPT Plus's image generation for one hour, and then try to convince me, with a straight face, that anyone will ever pay for stock photography again. And that was a random Tuesday at OpenAI.
We are already staring at the death of many professions (pretty much any profession that involves data entry, munging/transforming of data, is already dead, even if the realization has not hit yet), and the LLMs that we have right now are much worse than anything we will have in a year.
We should be freaking out about this. It's fine to argue about which actions make sense, but it is silly to dismiss this as some crackpot religious belief.
It really is inhuman.
Check out the Zizians. 'The only thing that matters is the singularity' thinking leads to suicide, mental illness, and murder. It illuminates Muskism.
"Check out the Zizians."
I'd rather not, thanks. As someone once said, "The cults you will have always with you."
But the cult-leaders are not usually in the position of acting president.
If you think that a crazy cult forming around an idea discredits that idea, then all ideas are discredited.
Seriously, imagine if people treated a more mainstream idea like this. Imagine if, upon being told that a politician was a Christian that they'd been discredited by David Koresh. Imagine hearing that a politician admires the democratic ideals of the Founding Fathers and arguing that Robespierre discredited them.
I think the mental illness predated the belief in the singularity.
The fuck even IS "singularity," I ignore technophiles at all costs.
"...I ignore technophiles at all costs."
As I understand it, they are people who enjoy a kind of electronic dance music.
feels weird to ignore technology.
Basically, imagine you are a blacksmith during the invention of the automobile. Demand is about to fall off real quick for horseshoes. Now, imagine that almost all of us are blacksmiths.
Now imagine the automobile is really just a Segway scooter.
It's the idea that when we invent AIs that are smarter than humans, they will be able to solve so many problems and do so many amazing things that whatever society is like afterward will basically be unimaginable. For this reason, making sure humans will be able to control and get along with such entities is vastly more important than all other problems.
What if we pull the plug on their backup power unit?
"when" lmao
That's definitely a very hard "if," chief.
Why is it an "if?" Technological progress in computer science has been moving along pretty steadily. Even if it slows down, someone will do it eventually, unless you think it's literally impossible. Do you think that there is some law of the universe that prevents anything from being smarter than Einstein?
Seriously, we have a good 5 billion years till the sun explodes. You think no one in all that time will figure it out?
I think that one way the current political discussion is impoverished is that few political writers really appreciate the dynamics of faith as a mode of thought in political action.
What is fantasy about it? You can certainly disagree that superhuman AI is imminent, I do. But do you deny that when it is invented, it will vastly change society and be extremely important?
I suppose the strongest argument against the sort of extreme singularity scenario people like Musk pitch is that maybe there won't be a strong advantage to whoever comes up with superhuman AI first. Maybe it will be possible for other people to copy it fast enough that whoever owns the first one won't be vastly more powerful than everyone else forever. That seems plausible, but it still means that it is likely that superhuman AI will significantly change the social landscape.
As an elderly US biomedical researcher, I completely agree with the core of your essay. Some "indirect costs" are probably inflated, and DEI initiatives in recent years may have distorted some elements of America's research program, but it's hard to understand the overall thrust of the current administration's cuts to science except as visceral hostility towards one of the most important contributors to health and wealth. A particularly egregious current example is the decision to eliminate NIST's Division of Atomic Spectroscopy. https://www.nist.gov/pml/quantum-measurement/atomic-spectroscopy
Perhaps one element of this seemingly mad agenda of cuts is a reaction to an unwise collective expression of partisan Democratic views among US scientists?
>Perhaps one element of this seemingly mad agenda of cuts is a reaction to an unwise collective expression of partisan Democratic views among US scientists?
I think there is a very important distinction between scientists voting for Democrats and scientists who claim to speak for the scientific community making nonscientific claims and getting caught manipulating papers in the name of social justice.
Scientists tend to vote Democratic because Republicans tend to be anti-science. When your university has to build separate facilities to perform stem cell research because the Republicans pass legislation banning federal funding for it and enforces it down to overhead costs paying for electricity that powers lights that illuminate labs that are doing otherwise state-funded stem cell research... well, it's not likely to encourage you to vote Republican and there are functionally only two parties.
Partisan opposition to a political party that opposes your profession is defensible. Claiming that the efficacy of masking and social distancing is different in church than at an anti-racism protest is an open invitation for the soup-to-nuts politicization of science. Science can be non-partisan even if scientists are not, as long as it remains objective and internally consistent. But a handful of selfish idiots have ruined it for all of us by mugging for likes on social media.
The two professions who were hurt the most by social media incentivizing them to act like performative fools in public were academia (including science) and journalism.
I'm a Democrat, but I'm not convinced that Democrats are pro-science and Republicans are anti-science (I'm not even sure what those terms mean). A variety of opinions wildly at odds with contemporary scientific thought are held preferentially by people who identify as Republicans (e.g. disbelief in evolution; rejection of CO2-based climate change), but others are held predominantly by Democrats (e.g., GMO's will kill us; sex has no biological basis). I'm inclined to accept your explanation for why scientists tend to vote Democratic, but I suspect cultural factors are more salient. For example, scientists are highly-educated and the contemporary Democratic party is culturally more attractive to well-educated Americans. I completely agree with your last paragraph.
The Republican anti-science views you cite are much closer to the center of the Republican party than the (more fringe) "Democratic" anti-science views are to the center of Democratic party.
Would the Democratic party ever have put RFK Jr as Secretary of HHS?
They made a trans woman, Rachel Levine, the Admiral of the United States Public Health Service Commissioned Corps, where Levine put effort into quashing and binning studies which demonstrated negative outcomes from trans healthcare, and pressured WPATH to remove age guidelines for trans care from their standards of care.
Ah, an *assistant* secretary of HHS. Yes, yes, I see it now. Both sides are completely equal in their guilt. And thanks for pointing out that she's trans because that's the important thing.
It is important because Levine is obviously ideologically self interested. If an evangelical was quashing research into stem cells, you would think it was very relevant to note their personal details.
The Levine story has some legs:
https://www.bmj.com/content/bmj/387/bmj.q2227.full.pdf
Certainly not. Mr. Trump has seemingly gone out of his way to select senior officials who seem outrageous (though in many cases their reality is less so). You may be correct about fringe vs. center, but perhaps this simply reflects the imbalance of education between Republicans and Democrats.
On the GMO front, that was very specifically the RFK Jr. part of the left. Things like Natural News have apparently been trending more rightwing in recent years. Meanwhile, USAID people were often pro-GMO.
The thing that gets me is that the incorrect Democratic beliefs you list are kind of insignificant compared to CO2 emissions. (And denying evolution can have some uniquely terrible societal effects, but so can believing in it, so. It's a level of harmfulness below CO2, anyway.)
Given that my liberal nature inherently trends towards self-doubt, I always wondered if I was missing something about politics - after all, their side feels the same about mine as I do about theirs, right? But climate change was a north star. "Oh yeah - if they can deny that then they are clearly the wrong side." And maybe it's not appropriate to generalize from one issue like that but we are talking about a subjective evaluation. We do our best.
Since I gave you some pushback... don't get me wrong. The anti-GMO shit pisses me off. It's like anti-nuclear. Our species came up with a god damn miracle to create once-unimaginable abundance, and you're mad about it and want to shut it down, when you can't even point to the supposed consequence.
And it's not just aesthetic. If activists actually had success in fighting back GMOs, it would reduce global food production, and lead to actual human suffering, just like the people who have had to continue living next to coal-burning power stations because of unfounded fears of nuclear meltdowns. It's a really bad position.
As for the other example, the belief is actually that "gender has no biological basis"; it's kind of impossible to claim that sex has no biological basis. (Although I'm sure there are some particularly ignorant people who don't understand the terms in question who do.)
As it happens I think it's also pretty indefensible when you get down to it - gender seems, in a large majority of cases, downstream from sex. You're right that some gender studies massage the truth a little bit to support what is a moral belief (and one that I share) that trans people should be treated with respect. But saying gender has no biological basis is at least less silly than saying "there's no biological basis in what organ your gonads turn into."
I'm not going to argue that ignoring fossil fuels and climate is a problem, because it is. My personal opinion is that the left has exaggerated the urgency of the problem, in part for political reasons, just as the responsible right has understated the problem.
I think the biggest problem facing American civil society today is tribalism. Just my opinion.
I actually agree with you that there was a lot of exaggeration of the climate threat. The problem was not so much the original "Inconvenient Truth"-era projections of what uninterrupted emissions growth would lead to as the refusal to notice when the measures we took vastly improved the worst-case scenario. It was continually insisted, and is to this day, that "nothing is being done."
As a result people are still thinking of a pretty manageable +2C world like it was a +10C world that actually ignoring the problem would have created. As my grandfather used to like to say about non-corporeal problems, "all it takes is money."
Agreed on the tribalism. I think it's kind of a reversion to a historical norm, and it was the LACK of tribalism in the postwar period that was notable. And I think it can be traced directly to near-ubiquitous military service in an existential war - it's hard to care if somebody has a different accent from you when they had your back on Iwo Jima. Obviously that is not something we can or would want to create in every generation.
Now if you want my takes to REALLY get hot, let me talk about how the genuinely-substantial health risks and annoyance of smoking were blown up into an automatic death sentence and the ultimate imposition as part of a scorched-earth effort to eliminate widespread smoking. 😉 (It worked; I imagine most people would say it was worth it.)
I think contemporary panic about climate change among Democratic Progressives represents a mixture of motives, ranging from scientific ignorance, to failure to adjust extrapolations, to a sense that for anything to happen we have to exaggerate, to a sincere desire to take down the capitalists, to Russian disinformation, to a quasi-religious environmental perspective. As with most attitudes, individual motives are generally complex.
Re tribalism, there surely must be ways to introduce/promote national service that don't require war.
Re smoking, I don't like the way the risks of e.g. passive exposure were exaggerated, but that reflects that I'm a scientist and dislike manipulation of facts. The decline in smoking in the US is very impressive; I couldn't say whether it's because of a cultural shift, health fears, or taxes.
The rich irony is that scientific funding actually increases when Republicans are in power. The GOP is not anti-scientific-funding... I mean, it didn't used to be... but it wanted to dictate what was and was not appropriate for scientists to investigate, which is antithetical to science.
I think the main drivers are the tendency for Republicans to be skeptical of how taxpayer money is spent (they constantly introduce legislation to insert themselves into the grant review process) and that part of their base has very strong moral objections to specific kinds of scientific inquiry.
Part of the Democrat's base is anti-science in that they vilify acronyms like GMO and think magnets cure cancer, but they seem to lack the will to enforce those priorities on research funding. They tend to direct their ire at the private sector and toss red meat to their base by hauling Monsanto executives in front of Congress.
"The rich irony is that scientific funding actually increases when Republicans are in power."
This is a counterintuitive claim. I'm not denying it, but I'd love to see a citation.
I think it makes more sense when you consider that the departments of Defense and Agriculture are huge sources of research funding.
I haven't done a deep dive on this but it is notable that the biggest leap in government R&D spending was just under the Biden administration:
https://fred.stlouisfed.org/series/Y057RC1Q027SBEA
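If anyone wants to poke at that series themselves, a quick sketch (assuming the pandas_datareader package, which has a FRED backend):

    # Pull the FRED series linked above and average it by year
    # to eyeball changes across administrations.
    import pandas_datareader.data as web

    rd = web.DataReader("Y057RC1Q027SBEA", "fred", start="2000-01-01")
    print(rd.resample("YS").mean())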
Which party is pro- or anti-science isn’t about the stupid beliefs members hold; it’s about whether their policies will lead to more or less scientific research being performed.
I think it’s obvious which party is actually pro science.
Historically it's not obvious, though it's clear that Mr. Trump is trying to cut government support for basic science.
Many Democrats make assumptions about support for science that may be incorrect. US scientific research has prospered to the extent that it has succeeded in remaining nonpartisan, allowing it to win bipartisan support. I find the current environment remarkably unpleasant, but I was also irritated by the pervasive Woke climate that infiltrated scientific funding during Biden's term.
If the goal is to drive a wedge between Republicans and support for science, loudly screaming about how they hate science is a good way to do it.
It's hard to remember but our most recent non-Trump Republican President was Bush, and he poured craploads into stuff, a lot of it stuff I thought was silly even at the time like hydrogen, but you can see the outlines of what he was going for and a lot of people disagreed with me so it wasn't categorically stupid. We probably learned some interesting things even if the ultimate path of a hydrogen economy was a failure.
I completely agree. And recall that the remarkably-effective development of the Covid mRNA vaccines was during Trump's first term.
I don't think that is clear at all. It is just as likely that Trump is trying to control science and academia as part of the consolidation of power within the political movement that he leads.
maybe both? TBH with Mr. Trump I never know when he's doing something subtle and tricky, and when he's just throwing his plate of food against the wall.
Twist yourself into knots all you want to avoid the reality that current Republicans are anti-science. I’m unsure of the payoff for you.
If you want to make an argument for which stances and policies promote more and better scientific research, I can see value there.
When they're closing Social Security Administration offices and Hegseth is promising a cumulative 40% cut in DoD's budget it makes it hard to argue that the savage cuts are focused just on Democrats and Democrat-supporting constituencies.
These guys are just nihilists when it comes to the government, pure and simple.
"I think there is a very important distinction between scientists voting for Democrats and scientists who claim to speak for the scientific community making nonscientific claims and getting caught manipulating papers in the name of social justice."
I think that people who talk about extreme edge cases that they can't even provide any examples for are generally trying to cause confusion and mislead people about the basic reality of a situation. People think a lot of things.
What you are alleging DOES happen but is sufficiently rare - my God, especially in hard sciences - that citing it as a major issue in the context of chainsaw-deep cuts seems like intentional obfuscation.
The most salient example is the "lab leak hypothesis". Without any evidence whatsoever, many prominent scientists---specifically those with big public profiles who speak with authority---began claiming that there was no scientific basis supporting the hypothesis that covid 19 originated from a lab.
A very high-profile paper was published (https://doi.org/10.1038/s41591-020-0820-9) stating: "we do not believe that any type of laboratory-based scenario is plausible"
And echoed by other scientists (https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30418-9/fulltext#body-ref-sbref100): "The rapid, open, and transparent sharing of data on this outbreak is now being threatened by rumours and misinformation around its origins. We stand together to strongly condemn conspiracy theories suggesting that COVID-19 does not have a natural origin."
However now that everyone is willing to speak up, it seems not so much a consensus as baseless assertions: https://usrtk.org/covid-19-origins/origin-of-sars-cov-2-gain-of-function-readings/
In that trove are quotes from the corresponding author of the definitive "proximal origins of covid" paper cited above such as “some of the features [of Covid-19] (potentially) look engineered” and that it was “inconsistent with expectations from evolutionary theory”. And in an email (obtained with a FOIA request): “I think the main thing still in my mind is that the lab escape version of this is so friggin’ likely to have happened because they were already doing this type of work and the molecular data is fully consistent with that scenario.”
That same person, having privately expressed that level of uncertainty, published a paper in Nature Medicine that reads: "The genomic features described here may explain in part the infectiousness and transmissibility of SARS-CoV-2 in humans. Although the evidence shows that SARS-CoV-2 is not a purposefully manipulated virus, it is currently impossible to prove or disprove the other theories of its origin described here. However, since we observed all notable SARS-CoV-2 features, including the optimized RBD and polybasic cleavage site, in related coronaviruses in nature, we do not believe that any type of laboratory-based scenario is plausible."
Granted, the initial impetus seemed not to be social justice, rather, to protect gain of function research, i.e., they did not want it to be blamed for covid. But it was quickly swept up in partisan politics and *many* scientists and physicians published similar papers and signed on to letters condemning the lab-leak hypothesis as racist. Nonetheless writing that "the evidence shows that SARS-CoV-2 is not a purposefully manipulated virus" in a prominent scientific journal after privately stating the exact opposite strikes me as a clear case of manipulating a paper.
Then there were the letters about how enforcing social distancing and masking rules on protestors amounted to "shutting down protests under the guise of health concerns." (https://www.cnn.com/2020/06/05/health/health-care-open-letter-protests-coronavirus-trnd/index.html) Fair enough, but the same people who railed against church-goers as recklessly spreading covid said: "However, as public health advocates, we do not condemn these gatherings as risky for COVID-19 transmission. We support them as vital to the national public health and to the threatened health specifically of Black people in the United States.... This should not be confused with a permissive stance on all gatherings, particularly protests against stay-home orders. Those actions not only oppose public health interventions, but are also rooted in white nationalism and run contrary to respect for Black lives."
I don't think this is an edge case or that I am cherry picking. I am a physical scientist and these clowns did not and do not speak for me or the scientific community writ large. Yet their invocation of science and claims of scientific evidence that did not exist *and that they directly refuted in private email conversations* are now being used as justification to destroy my career and profession.
There is now a ton of evidence that lab leak is false and covid is of natural origin.
https://www.astralcodexten.com/p/practically-a-book-review-rootclaim
Part of the sanewashing of early administration moves was to paint them as clever political positioning. Thus, going after USAID was smart because it forced Democrats to defend foreign aid, which no one likes.
I think we can put the notion that these folks are clever strategists to bed. They're randomly raging through the entire federal government, destroying whatever their Sauron eye happens to land on next.
Is it an eye or just an effluent sphincter?
Whenever you think something the Trump admin is doing sounds like a good idea, you can be assured it will be implemented in the most incompetent and actively idiotic way imaginable. What seems to be happening is that one of the more learned members of his team suggests a good (or at least defensible) idea, and then it is implemented in a warped and twisted way.
What the US is doing is akin to a high earning man with a big mortgage deciding to rip out the pipes from his walls to sell for scrap just to make a little more cash. It's unfathomably stupid.
The two things from Trump administrations that I have thought of as good ideas were capping the mortgage interest tax deduction, and ending the penny. I have no idea if ending the penny has actually happened, or if it was just an announcement that has never gone into force.
It's technically a thing that Congress has to implement to have teeth. The executive has the power to order the mint to produce an appropriate amount of coinage, but not which coins (and bills) to make. That's Congress. In theory Trump can tell the mint that the appropriate amount is 0, but that's probably challengeable in court (like everything else). Someone would just have to file a suit arguing that there is in fact a need for pennies and the appropriate amount is not 0.
Make the appropriate amount a very limited production run every year for collectors and archivists and you'd have a functional discontinuation of the penny that'll be much harder to challenge.
I think the fact is, most actual good policy ideas that could be implemented at the federal level either require an act of congress, or a lot of hard work and planning at the executive level to set up. Since the GOP, despite having a majority, does not seem to be passing any other bills because they are focused on the big tax cut, and anyone smart and capable and able to do long-term planning does not seem interested in working for the Trump administration, here we are.
I disagree with this -- if the goal is to cripple the power and influence of universities and other scientific institutions (because they're left-coded) then this is a competent and well designed set of policies.
I guess one question is whether going to college makes you more liberal, or whether being more liberal makes you likelier to go to college, i.e. which way does the causal arrow point. If the former, then reducing the size of the higher education sector can make the country more conservative and push policy to the right, but I'm not sure if the former is in fact accurate.
The thing I find odd about all this is the lack of internal opposition to Trump's oddball beliefs across the board. Is there not a single Republican Senator or Congressman who is a gung-ho science supporter, does not like Russia invading European countries, DID understand what he was taught in Econ 101 about tariffs, and understands how valuable legal immigration is?
I guess that in order to BE a Republican you have to suppress the knowledge that tax cuts that create deficits are detrimental to long-term growth, and not understand that taxing or regulating negative externalities promotes it. But it's sort of amazing that the party can be so united around the anti-growth agenda, with only the tax angle having the redeeming virtue of transferring income to rich people.
Chimpanzee politics? Maybe they’re waiting for their present leadership to exhibit a little more weakness before they make their move. They can’t all be on the same page, can they?
It's going to take every fiber of our being to welcome these prodigals back into the fold when they finally find that it's safe to attack MAGA publicly. The urge to tell these weak, cowardly bastards to go to hell will be so overwhelming. Dentistry will be a growth industry, due to the dental damage caused by such intense teeth gritting.
I think it is this. Eventually something will happen, the polls will just tank, and Republicans will shift. We saw this with Bush 43.
They've either been retiring, getting primaried, or keeping their mouths shut. Among regular voters, a lot of these people are now just Democrats.
No, they’re independents who may be forced to vote for Democrats because that’s the only alternative to insanity—but it doesn’t make them Democrats.
I think a large swath of the country believes that Liz Cheney is a Democrat.
Not just a Democrat but a left wing radical. She's literally "comrade Cheney" in some far right channels
Also now there's the threat of not just a Trump endorsed primary but a Trump endorsed, Elon funded primary challenge. THAT has real teeth for some politicians. The game theory suggests the individual best thing to do is shut up unless you can be sure you're not the only one publicly in defiance.
From their perspective, they see it as their version of the "ethic of responsibility" or the "slow boring of hard boards." Opposing the admin will just get you kicked out of politics and replaced by someone who genuinely shares their anti-science beliefs, which doesn't help anyone. Better to just try to influence and moderate the administration privately and from within.
It's a rational thought process, but it also goes to show that there are limits to Weberism. At some point, you do have to decide that compromising isn't worth it and that it's better to help the other side. But where that line is exactly is a judgment call.
Not a single one would survive a primary?
I'm sure some of them would survive, but all of them - 100% - would be at high risk. The unfortunate fact is that more than one half of high-propensity GOP primary voters (which probably equates to more than 20% of the country) are fully and wholeheartedly in support of right-wing authoritarianism (though not all of them have intellectualized that belief system).
We only need a few voters from their sanity fringe. :)
I think you're discovering too why Matt keeps saying that the GOP (or maybe more accurately GOP elites) only exists to cut taxes for rich people.
I am 50 years old and, all my life, it’s the one thing that absolutely every Republican administration is certain to do. It’s one of the only real accomplishments of the first Trump administration, most of the rest was just noise.
I was calling the 2017 tax cuts for the rich and deficits act the "Tax Cuts for the Rich and Deficits Act" when Matt was still at Vox. :) (I'm not claiming plagiarism. :))
You've kind of listed my red lines where I can't make any compromise with the right. That plus relitigating the 2020 election. Take away those and I'd still probably be more on the right than the left.
I am always in the center, the meo-centric theory of politics. :)
Of course many Republican office holders hold such views and are happy to tell Chuck Schumer everything about them in the gym. But the cowards will never stick their necks out in public.
It’s going to be a hollow victory indeed if inflation jumps due to large deficits and tariffs.
It’s more than the making of scientific pronouncements about political topics. If you look through grant proposals you’ll see an incredible number of needless groupthink-influenced buzzwords. The grant writers know that if they propose a study of statin use during menopause, it has less of a chance of being funded than a study of statin use in menopausal lesbians of color.
It seems somewhat similar to a kid who gets his first corporate job and starts spewing corporate buzzwords because he thinks that’s how the cool kids talk.
That said I’m not really sure how it all came to be. Why did we have senior academic leaders openly using “white males” as a pejorative? How did trying to be inclusive end up excluding groups that are central to the continued funding of the organization?
Are you a scientist? There's no way the claim you made about which grant would be more likely to get funded is true. It is plausible, even likely, that discussion of how your grant will help marginalized communities will make it more likely to get funded, and even a design that added additional focus on them would probably help. But broad research is just much more significant and fundable than narrow research.
I am and it is. Grants are so competitive that you essentially need a perfect score from a review panel just to be considered for funding. If there is one person on the review panel who thinks that science has to be anti-racist or promote equity or whatever, they will spike your proposal. But the reverse is not true because very few people will spike a proposal because it has that language. Moreover, funding agencies *require* sections of the proposal that all but insist on the use of such language. Prior to literally last month, if you wanted to get funding from the Department of Energy to research photonic coatings to improve the efficiency of solar panels, you literally had to include a section entitled "Plan for Promoting Inclusive and Equitable Research" explaining how that research would promote diversity and equity in science.
“…if you wanted to get funding from the Department of Energy to research photonic coatings to improve the efficiency of solar panels, you literally had to include a section entitled ‘Plan for Promoting Inclusive and Equitable Research’”
That obviously shouldn’t have happened. How, then, to prevent it from happening again?
The problem here is that grant funding is a proposal-writing contest and if the board gets two proposals to research photonic coatings to improve the efficiency of solar panels, then they can't pick which one on the basis of which line of research is more likely to be fruitful (because every single expert on efficiency of photonic coatings of solar panels is almost certainly a current or former member of one of the two research teams and therefore will definitely say "my current/former team"), so they need some other tie-breaker and that's inevitably going to be essentially a writing contest.
So if it isn't a contest on "how well can you write a 'Plan for Promoting Inclusive and Equitable Research'?", it's going to be on "how well can you write a 'Plan for Promoting Great Scientists of History in our field of research'?" or some similar anti-woke thing. Or it's going to be "who is the more prestigious university?". Or "who asked for the least money?"
In some cases, there are people who are sufficiently qualified to judge between specific proposals on a technical level (a good example here is telescope time in astronomy - there are lots of people working for the telescopes who can judge between the different proposals as to what is the most scientific gain for the least telescope time), but for a lot of research, the only people who really understand it well enough to do a meaningful comparative cost/benefit analysis are the researchers themselves and maybe one or two competing research teams.
>So if it isn't a contest on "how well can you write a 'Plan for Promoting Inclusive and Equitable Research'?", it's going to be on "how well can you write a 'Plan for Promoting Great Scientists of History in our field of research'?" or some similar anti-woke thing. Or it's going to be "who is the more prestigious university?". Or "who asked for the least money?"
Or just looking at their track-records and determining one has a higher chance for success.
Which is a great way of shutting out new people from doing research.
I’m not claiming that they have found the best possible approach - merely that there is a real problem and in many cases they’re probably going to use something that is good in itself but orthogonal to the research itself.
At that point why not just use a lottery? Set some minimum threshold the grant has to pass to qualify and then allot funding randomly.
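For concreteness, here is a minimal sketch of what such a threshold-plus-lottery allocation could look like; this is my illustration only, not anyone's actual proposal, and all the names, scores, and dollar figures are hypothetical.

```python
import random

# Hypothetical threshold-plus-lottery grant allocation: every proposal whose
# review score clears a quality bar enters a random draw; funding is then
# handed out in the drawn order until the budget runs out.

def lottery_allocation(proposals, threshold, budget, seed=None):
    """proposals: list of (name, review_score, requested_amount) tuples."""
    rng = random.Random(seed)
    qualified = [p for p in proposals if p[1] >= threshold]
    rng.shuffle(qualified)  # the random order replaces fine-grained ranking
    funded, remaining = [], budget
    for name, score, amount in qualified:
        if amount <= remaining:
            funded.append(name)
            remaining -= amount
    return funded

# Made-up proposals loosely echoing examples from this thread.
proposals = [
    ("photonic coatings", 8.2, 400_000),
    ("statins in menopause", 7.5, 350_000),
    ("viral metal homeostasis", 6.1, 500_000),
    ("telescope survey", 9.0, 300_000),
]
print(lottery_allocation(proposals, threshold=7.0, budget=700_000, seed=42))
```

The appeal of the design is that reviewers only have to make the judgment they can plausibly make (is this fundable at all?) rather than the one they can't (is this proposal marginally better-written than that one?).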
Broadly I agree that this is a good idea, but people are reluctant to give up the idea that they can make effective distinctions (on the reviewer end) or the idea that they have ultimate agency in their own lives (on the proposer end), even when it's clear that they can't and don't.
Canada has a system that functions essentially that way. Researchers are more-or-less ensured a baseline level of funding so long as they meet certain criteria and submit coherent proposals. They then compete for funding above and beyond that in a more merit-oriented process.
also how college admissions should work
Because Americans seem to suffer from a cultural delusion that it's better and more "meritocratic" to allocate random chance via over-optimized competition than explicit lottery.
Persuade the leaders of the American scientific community that this isn't an important goal.
Seems to me a good way to convince them is to choke off all of their federal funding until the research institutions publish a policy forbidding any consideration of race or sex outside what is scientifically justifiable, and forbidding any other useless distinctions, e.g., equity.
Sure, Congress can---and does---attach all kinds of requirements to funding. But what is happening now is lawless, unaccountable, unilateral withholding of appropriated and awarded funds. It is a direct assault on the constitutional system of government and is in no way justifiable because some people don't like how some taxpayer money was being spent at some universities by some researchers.
I don't think "we're going to force the intellectuals to conform to the ruling political ideas" is either a good idea or one with a successful track record.
Similarly, for NSF proposals you have to have a Broader Impacts section (this is mostly a good thing) and until recently in computing you needed to talk explicitly about Broadening Participation in Computing. But this didn't mean that panels actually preferred research about "woke" topics over more conventional topics, just that it was a disadvantage not to have something to say about BPC.
The definition of "woke" is important. If your broader impacts discusses plans to engage K-12 students and it does not include language making it abundantly clear that the schools you work with comprise a diverse population of historically under-represented, under-served students, etc., you will hear about it from the panel.
I'm not saying that you have to write an essay about how math is racist in order to get funded. I'm observing that I have reviewed and submitted zero federal grant proposals that do not directly identify a group of people, use adjectives that convey their historical inequity with regard to science and then enumerate a plan to help mitigate that inequity.
Based on this observation I assert that you cannot, in fact, write whatever you want in a PIER plan or a Broader Impacts section. If you do not explain how your esoteric, highly specialized scientific investigation will "promote inclusive and equitable research" your proposal will not be selected for funding. I don't know how else to characterize that other than compelling the use of "woke" language.
I assert that many people in the scientific community have internalized a specific philosophy (that I am labeling woke) to the degree that they have lost sight of the basic fact that you cannot run a successful research program at an American university without familiarizing yourself with specific language and including it in your proposals.
For a bit of context: I moved to the US from Europe and the first thing I had to do was ask my American friends what a diversity statement was and how to write one.
It's also the case that different agencies handle these issues in their own ways. At NASA, the Science Mission Directorate was starting to require Inclusion Plans for some proposals. While some people were worried that this was going to be all about esoteric DEI and woke academic ideology, it was actually really basic stuff like "If you are working with grad students, are they going to have the chance to present this research at some point? How are you going to make sure all your team members can raise concerns? Are you going to recruit people by just asking your two closest friends, or will you at least try to open up the search a bit?"
Sure it might be a bit of micromanaging, but what aspect of proposing for federal grants isn't like that? The basic goal was to get PIs to think a bit about how to involve people and have a functional research team. Nothing about race, gender, etc was required, and in the several reviews I participated in, none of the proposers were dinged if they just focused on team activities and didn't address the DE part of DEI.
I talked to both proposers and reviewers from the more conservative, DEI-skeptical side of the scientific community and outside of a single early review that was handled badly, they came away comfortable with what was happening.
Just like all areas of political culture, the worst excesses of any larger effort are always the poster children for the whole thing. In reality, the bulk of the people in large bureaucracies are just trying to do a good job to meet the agency goals, whether it is health, science, technology, etc.
“…in the several reviews I participated in, none of the proposers were dinged if they just focused on team activities and didn't address the DE part of DEI”
Did any proposals get dinged if they did include DEI?
Right, I agree with basically all of that, although we probably disagree somewhat about the merits of it. My point is that you can say "I'm going to research the impact of metal homeostasis on viruses, and I'm also going to give some talks in the most disadvantaged local elementary school" and that's more likely to get funded than "I'm going to do research on how viruses are mostly bad for minorities".
More broadly, science funding in the US is in many ways best characterized as a proposal writing contest, and this is an aspect of that more than a change in what's funded.
I am not defending the ongoing assault on academic science and I'm not saying that we deserve what is happening. I also doubt we disagree on the merits. I am a huge proponent of outreach to under-served communities. Having benefited from it myself, I organize and engage in it as much as possible.
If we disagree at all, it is on the distinction between the technical merits of a proposal and the broader impacts. I don't think you can separate them and I do think that proposals that decorate their abstracts with woke language are more likely to be funded.
I don't oppose any of the activities and I think it's a good thing that federal funding encourages broader impacts, etc. If I sound exasperated it is because I feel that the funding agencies have been colonized by an activist mindset that thinks it is ok to rob us of our agency to determine how we want to contribute to the broader community. Having to type out the words "Plan for Promoting Inclusive and Equitable Research" and then populate it with a bunch of stuff I don't believe is offensive to my liberal principles.
What area of science are you in? I'm in biomed/phys/chem and that's not my experience at all. But I appreciate that DEI has been adopted more heavily in some communities than others (my impression is that astro is pretty into it, for example).
The materials version of your field---but it doesn't matter because it just takes one reviewer who is really into "all of that stuff" to spike your whole proposal because they don't think your Broader Impacts is up to their standards.
> Broader Impacts section (this is mostly a good thing)
I'm only n of 1, but I give zero weight to anything DEI in the broader impacts. But like you say, panels that I've been on are still judging proposals almost entirely by the science.
Again, I'll point to the specific *programs* created as being the source of the most DEI-heavy grants. Here's yet another one I easily found in a quick search: https://www.nsf.gov/funding/opportunities/workplace-equity-persons-disabilities-stem-stem-education/nsf23-593/solicitation
Indeed, there have been a bunch of programs funded to explicitly attempt to address inequity of various kinds. Obviously one can debate whether those things are good to address, or whether the funded programs are effective, but that's very different from suggesting that everything funded by the NIH today is "woke".
Yeah, those plans were really annoying. I would say there's a difference, though, between a proposal in which inclusion/equity is core to the research (a few) and one in which it's an add-on (the majority). And I can tell you that for the majority, a lot of scientists have been using ChatGPT to help put them together.
So again, I want to distinguish between blaming scientists for being too "woke" (not unheard of to be clear) and blaming us for following the incentives created by politicians and the administrators who were hired to implement their directives.
I agree that we mostly just accept the constraints of the proposal writing competition and evaluate the core ideas being proposed and the track-records of the people proposing them. If they made us write grant proposals in crayon, we'd do it. I resent the abuse of those incentives, whether it be making every grant include the word Quantum and the acronym AI or narrowing what counts as broader impacts to specific social justice outcomes. And now I'm hopping mad about it because it's created a paper trail of abstracts with ridiculous woke language for completely unrelated research, and we're being beaten over the head with it.
There is absolutely nothing wrong with funding research with social justice components or that is primarily aimed at DEI goals. Nothing. It's the indefensible absurdity of forcing it on everyone else that is wrong. Not the goals, mind you, specifically the language. A proposal about developing materials for quantum computing that mentions racial bias reads---correctly---as Orwellian and discredits the entire enterprise.
“ It is plausible, even likely, that discussion of how your grant will help marginalized communities will make it more likely to get funded, and even a design that added additional focus on them would probably help.”
It doesn’t sound like you’re disagreeing with what I said.
No, actually, what research you do and what you say about the benefits of that research in your grant proposal are two very different things. Also, "broad research is more likely to get funded if it also attends to the interests of narrower communities" is very different from "broad research is less likely to get funded than narrow research."
Right the goal was the statin research but there is/was a need for some woke spin to goose the likelihood of grant funding.
Aligning your work with the priorities of your funders is a game grant writers have played since time immemorial. If you want research aligned with your new and different values, state them clearly and do a good job of analyzing the proposals.
This isn't remotely close to what they're doing.
The subtle, but important, distinction is that the woke spin is not part of the *technical proposal* where you write inscrutable jargon only other experts in your field understand.
The woke spin is confined to the sections where you are obliged to describe how awarding you this grant will lead to "broader impacts". Although it can contain boilerplate stuff about writing papers, giving talks, organizing symposia, etc. it should also explain how you will engage with the broader community and whatnot. *That* part is where the woke language creeps in.
The insidious part is that the language of the technical proposal and broader impact are merged in (public) summaries, making it appear to the fine young asshats at DOGE that we're pouring money into scientific research that is being diverted to help teach kids with gender dysphoria how to read the Periodic Table. And the propagandists in right-wing media are all too happy to amplify this misinformation to manipulate the public into thinking its actually a good thing to cripple scientific research because it's not even real to begin with, just another part of the Big Woke Conspiracy.
I think doing scientific research that would help marginalized communities is good.
But, as Matt often points out, you could define the marginalized groups economically and it would disproportionately help minorities but without the deeply unpopular racial baggage.
Works for me.
From "It's not happening" to "It's happening, and it's a good thing" in two sentences!
He’s saying that the “help marginalized communities” part isn’t done by limiting the scope of research (to “menopausal lesbians of color” in BZC’s droll formulation), it’s done in other ways.
I understood that, Tom.
I don't endorse BZC's rhetoric, but I saw it as an exaggeration meant to make the point that the scientific community became (still is?) obsessed with identity politics to the point that research proposals adopt the language of politics rather than science. And, yes, I do see Sam's response as refuting a specific thing in BZC's comment and then agreeing with what I took to be BZC's meaning.
I think there’s a difference between doing research that’s broadly useful but framing it as beneficial to certain groups in order to get funding, and doing research that really only is targeted at certain groups. You may not agree that’s an important distinction, but if you recognized Sam was making that distinction, then it was uncharitable to treat him as being inconsistent, as you did in your comment.
It's not happening, a different but related thing is happening, and that second thing is mostly good.
The lack of precision from the more dedicated anti-woke commenters (and I say this as someone who’s very woke-skeptical) is striking. I still remember the person who refused to believe that using the word “negro” in its historical context wouldn’t get you fired from academia.
"I still remember the person who refused to believe that using the word “negro” in its historical context wouldn’t get you fired from academia."
Fired - no. But I could see it impacting a tenure decision, especially if there was any objection made by students.
And I’m here in academia telling you that this is not the case. If you show me an example of it happening, that will be much more than the other guy managed to do!
All I can tell you is there are a ton of gratuitous invocations of race, gender, LGBT, discrimination, etc., in journal articles and have been for years. I think it is fair to say including or centering that stuff helps get you published.
I'd say it's a little more complicated than that (isn't it always?).
Scientists are certainly not immune from groupthink. But in terms of proposal reviews, it's more boring than you claim; I know if I reviewed the proposal you suggest, I would take points off for it being too narrow. Instead, the way proposals like that get funded is often through targeted programs rather than general calls. Here's a random example I quickly found: https://grants.nih.gov/grants/guide/pa-files/par-22-186.html
Scientists respond to incentives like anyone else. If there are specific pots of money for these things, we'll figure out how to go after them.
“ if they propose a study of statin use during menopause that had less of a chance of being funded than a study of statin use in menopausal lesbians of color”
This is just absolutely false. If you propose something and say it affects a smaller group rather than a bigger group, that isn’t going to help.
But if you add a paragraph about how your broadly beneficial research will have particular benefits for underprivileged groups, that will help it at some stages of the process.
An unfortunate quality of universities (and many large bureaucratic organizations) is how they have evolved into Rube Goldberg machines that deliver robust outcomes through inscrutable processes that are easy to criticize and hard to defend.
Imagine the mid-20th century, when the then-largest generation in US history---the Baby Boomers---was about to start graduating high school and entering the workforce. There was no way for the economy to absorb them, so a conscious decision was made to build out state university systems as a sort of flow control. Among the many consequences of this shift were an increase in careers available to women (e.g., professor), an increase in the production of nerds (e.g., to start tech companies and work for NASA) and the codifying of the government-funded research model that grew out of World War II.

Fast-forward a bit and the boomers moved through the university system, leading to dropping enrollment. Many state universities started to evolve into research-focused institutions, leading to the virtuous cycle of educating nerds and producing innovations that those nerds could implement in the private sector to create the modern world of information technology, pharmaceuticals, materials, etc. That evolution, however, came with a huge cost: a modern R1 university needs to maintain tens or hundreds of millions of dollars in infrastructure to support research in the natural sciences. Without that infrastructure, universities would not be able to recruit and retain the talent necessary to train the next generation of nerds, and the whole system breaks down.
While purchasing expensive equipment typically involves special infrastructure grants, maintaining it requires a lot of people, from highly specialized technicians to highly generalized facilities workers who take care of everything from leaky pipes to sophisticated redundant power systems needed to keep sensitive equipment (e.g., that requires cryogenic conditions) from failing every time the power goes out. That is what overhead pays for: facilities and administration. It pays for, among other things, the staff needed to ensure grants are submitted properly (which is non-trivial), funds are spent correctly, equipment doesn't break down, labs maintain proper airflow, etc.
Large state universities have become self-sustaining through tuition, donations and overhead. Often they are only state universities insofar as the state owns the land underneath them. Congress used to recognize the vital role universities play in educating the workforce and so made the process of adjusting overheads slow and deliberate. Overhead caps are not fixed, but when they have changed in the past, they have done so with plenty of warning and buy-in from the universities. Mandating sudden cuts to overhead and then enforcing them by threatening employees at NIH and NSF is illegal and should be blocked by the courts, but none of that matters because the current rates of overhead from active grants are already accounted for in the budgeting processes. State legislatures cannot and will not just hand over a bunch of money to make up for the shortfall, and private universities cannot and will not sell off a bunch of assets.
Universities across the country are already rescinding offers and freezing hiring of new faculty and admission of new graduate students. As Matt points out, the consequences of these actions will be felt later (meaning whoever is in power at that time will likely take the blame) and they will be amplified by the other mindlessly short-sighted policies. If you cripple university funding, you lose faculty and teaching capacity, meaning universities educate fewer nerds, meaning fewer people become scientists, engineers, doctors, economists, etc. In a few years, when there is a shortage of home-grown doctors, there won't be any skilled immigrants to fill in the gaps. Less healthcare will be delivered, meaning a slowdown of a huge part of the economy. As China overtakes the US in key areas of technology, there won't be enough American labs producing enough homegrown computer nerds to counteract that trend, and we will spiral into technological irrelevance. Meanwhile there won't be enough nerds to measure all the wild fluctuations in the economy, leading to more economic chaos and slower growth.
None of this is easy to explain to people who are already disinclined to care and especially those who have bought into the idea that America should be more like China 30 years ago, manufacturing low-value goods and throwing its weight around regionally. It also does not help that so many scientists publicly beclowned themselves at the height of woke panic and made us vulnerable to politicization.
TL;DR: the attack on science is way worse than it looks, and scientists have already moved into a mindset of managed decline.
I agree with basically everything else you write, but I have not seen the "flow control" claim before. Do you have a source I can read about that? And there wasn't an enrollment drop post-boomers; instead there's been a continued expansion (with maybe some post-2008 retrenchment).
Philip Bump's book The Aftermath makes a compelling argument, complete with references. He tracks the boomers' progress as they aged, pointing to the massive investments that followed, like the creation of the disposable diaper industry and the rapid construction of elementary schools, and then points to all the knock-on effects. And I think it is logical to assume that this pattern didn't just stop when the boomers started turning 18.
The fear that the boomers aging into the workforce would disrupt the labor market was very real, there were many proposed solutions, and the GI Bill had already demonstrated a similar effect. While I'm not sure you can point to a single document laying out the exact plan, it is not a coincidence that so many state universities feature brutalist architecture. It is a reflection of the era in which they were rapidly built out.
Thanks, I'll look at that book.
It makes total sense as there was a great fear when the war ended about how the economy would absorb all those GIs. The GI Bill college funding provision certainly reduced the volume of men reentering the workforce.
Yes the GI Bill was definitely a response to demobilization. But I'm not sure those same concerns were there for the baby boom.
You’re right, I misread.
Many commentators (MY and others in this thread) try to connect the current cuts to US science to some particular political stance. In reality, authoritarians commonly cut science as a threat to their power, often coupled with pseudoscientific beliefs and reliance on native-born scientists. The only truth that can be allowed comes from the mouth of the leader. The Soviet Union had Lysenkoism (destroying basic biology), the Chinese Cultural Revolution sent intellectuals to pig farms (or killed them), and Hitler banned "Jewish science", i.e., relativity. Of course, a few areas were allowed to progress (rocketry in Germany and the USSR) but only with very specific political goals in mind. This is not that hard, people!
Imperial Germany under Wilhelm II, the Brazilian military junta and James I were all great for science, there is no absolute rule about whether authoritarians are good or bad for science.
You don't even have to go that obscure. The People's Republic of China as it currently exists is hardly a bastion of free thought and yet it is either at or racing toward the technological frontier on essentially every relevant hard science domain. I think a more coherent case, rather than 'authoritarians = bad science', is that ideology is bad for science and ideological authoritarians have more leeway to punish academics for deviations from the state ideology.
You see this today in China where research of certain areas in the social sciences is absolutely forbidden, but it just happens to be the case that those areas don't really matter. And ideology is bad for science in the US too, as we've seen, but it's a little easier to get away with being heterodox when the orthodoxy's power ends at social and professional sanction rather than imprisonment.
Authoritarians are also broadly friendlier toward applied science than pure science.
That is the theory, and it is probably true of theocrats, but I think in Nazi Germany and the Soviet Union the preference for applied science was mainly motivated by anti-Semitism and by Jews being overrepresented in more theoretical areas.
If video games have taught me anything it's that the Nazis loved their super-science.
Without getting into a discussion about the extent to which you can compare James I with the regimes I mentioned, the concept remains that attacks on science commonly reflect more on the ruler than on the scientific establishment.
"...the extent to which you can compare James I ...."
On the one hand, he did support the most up-to-date, cutting-edge witch-hunters.
On the other hand, he wrote a long treatise about the dangers of smoking tobacco:
"A custome lothsome to the eye, hatefull to the Nose, harmefull to the braine, dangerous to the Lungs, and in the blacke stinking fume thereof, neerest resembling the horrible Stigian smoke of the pit that is bottomelesse."
I think you mean "totalitarian", not "authoritarian". A totalizing revolutionary ideology (Nazis, Soviets, Mao) can't tolerate alternative sources of truth. A dictator can be fine with it, though.
The Soviet Union had an informal arrangement with its math community: if you kept your head down, you could go about your research more or less unmolested. Of course, the politics within the community itself were quite vicious (once you got past the gatekeepers, who would sometimes sink Jews and other minorities in admissions).
So “the singularity” is coming, it’s going to change everything, and the people with power think the most important thing we can do about it is… fire a bunch of low level paper pushers, potentially cutting something like 0.5% of federal spending.
I’m not sure I get it. Shouldn’t the world’s richest man have higher priority things to do?
I don’t know about you but I’ve had some conversations with ChatGPT that are as freaky as fuck. There is something new under the sun.
That’s fine, and I do think AI has the potential to massively reduce the number of people needed to perform government administrative work. But I don’t see how or why progress in AI models justifies or necessitates any of the administration’s actions.
If anything, if there’s large scale private sector job loss on the horizon, I think we’re eventually going to need more government employment and/or more redistribution. We can’t all be personal trainers, aged care workers, and tiktok stars.
Personally I don’t think it has anything to do with AI. 99% of it is just Elon’s drug use, bipolar disorder, autism, being way too online and the long term effects of being the most financially successful person in the world. He’s just very high on his own supply.
Yeah, I think I agree with that. But if Musk really does think we’re on the verge of a new world, I would expect him to have some kind of theory of how his actions are connected to that new world. Even if it’s a bad theory.
He just wants to be a contender (and has probably just lost the plot in every other respect)
Yglesias hammers this a lot but he just doesn't care much about average people.
It makes sense if you think the idea is to get as many of Those People out of the government as possible for when the times comes to decide how to parcel out God's rewards.
Ummmmm... the scientific community bent over backwards to try to offer neutral statements about various findings (and indeed, there's almost a culture of shame when someone gets too big for their britches and does "science by press release"). But the Gingrich/right wing assault on science has been vast, and at some point people realized you have to fight back and not just roll over for the sake of neutrality when well funded organizations tell lies about your work. Because now we're in the endgame... fewer foreign researchers will come to the US, fewer research projects will be done, and the breakthroughs that could've happened now will not.
I’m going to disagree a bit. People who go into climate science tend to have a certain worldview. And that worldview includes a strong belief that saving the environment requires people to sacrifice and do without. But that makes the findings needlessly political.
They can and should say - the data indicates this is happening and to the best of our understanding this is how bad it could be.
And then they could list out possible mitigation strategies. Mitigation only - here are the pros and cons. Geoengineering - pros and cons. Vastly expanded nuclear - pros and cons.
But absent from that should be the aesthetic preferences of the researchers. If they like to ride their bike to work, great! Don’t let that influence the research.
If this is the solution to climate change and it horrifies the researchers - just take the win and let it go:
https://www.cadillac.com/shopping/configurator/electric/2025/escalade-iq/escalade-iq/model
I knew two climate science grad students who were quite dismissive of the pop-sci narrative and the science celebrities grifting off of apocalyptic narratives. They absolutely believed in climate change as a serious issue, but as experts in statistical modeling they also recognized that there are a lot of simplifying assumptions in every model as well as uncertainty in extrapolations to the future. Moreover, they were offended by scientist-turned-celebrity figures in the media who eliminated all rigor and nuance in their public speaking. If anything, those pop-sci individuals did a poor job representing the expertise and challenges of climate science.
Yes! That’s a very good point. I don’t have the xkcd comic handy, but there is often a huge disconnect between what the study or report actually says and how the media reports it.
It's PhD Comics, actually: https://phdcomics.com/comics.php?f=1174
99% of the time I read takes about how members of [x] profession or field have such-and-such worldview, the take-giver doesn't actually know anyone from that profession or field, let alone a representative sample, and only knows about people who get quoted in the paper.
Bingo!
I think that the public health set is an even better example of this.
Public health officials putting out statements in favor of protests was one of the most mind boggling things I've ever heard. Even five years later I have a hard time believing that actually happened.
And not just protests, but only certain type of protests they deemed worthy! As if the virus cared about speech viewpoints.
Do you remember the political attacks on Michael Mann, the researcher on the "hockey stick" graph? That's what I'm referring to.
I'm not saying what you're describing *doesn't* happen but Matt Hagy below describes this nicely... it's a consequence of our media infrastructure amplifying the most extreme voices. There are way more reasonable scientists communicating the way you describe, but they get drowned out.
Also, most scientists get into a field because they see *a problem* that needs to be addressed. Climate change is a pretty big problem! So is cancer research, so is energy research, etc.
Finally... in pharma (where I work) there's a pretty broad diversity of political stripes. There's probably as much disdain for the far left as the far right. But the far right has way more power! So even the conservative types have ended up being very anti-Trump.
That’s another important point as well - climate scientists being conflated with climate activists. And then of course the dangers of scientists becoming activists.
There's nothing wrong with being an activist! But as you rightly point out, if your activism (a) repulses more people than it persuades and (b) is wrong on the policy merits, you're doing way more harm than good.
I would go further and say that scientists should be activists for the same reason anyone with specialized knowledge should advocate based on that knowledge. You can make the case that the very purpose of tenure is to allow academics to be activists without fear of being fired.
The problem, as I see it, is when scientists start asserting that science is on their side. You can be wrong on the merits and advocate for stupid policies, but as soon as you invoke the "in my capacity as a scientist I assert that science supports my argument" you drag "the science" into the poisonous group dynamics of modern discourse.
It's a tough line to walk, because in many cases the science *is* clear about one side of an argument over the other (like, yes, vaccines are good and measles is bad), but other times it's way more muddled. Unfortunately it's usually the latter where people try to argue from authority rather than evidence.
When scientific journals and popular science magazines endorse political candidates, it makes scientists look partisan rather than scientific. Also, some science societies have put scientists on a pedestal as policy advisers, rather than having them explain the science within its limits and letting policymakers make policy informed by what science can actually say. That has led some to view scientists as partisan.
I think the polarization of science is overstated, including by Matt Y in this piece. Look at the Pew numbers from October 2024 -- yes scientists took a hit after covid but are still one of the most trusted professions:
https://www.pewresearch.org/science/2023/11/14/confidence-in-scientists-medical-scientists-and-other-groups-and-institutions-in-society/ps_2023-11-14_trust-in-scientists_1-01-png/
Oops meant to post this one too: https://www.pewresearch.org/science/2024/11/14/public-trust-in-scientists-and-views-on-their-role-in-policymaking/
There is a culture of shame around “science by press release”, but only because that is also a very real phenomenon that some people do engage in.
the press releases are often (well, always, as they have to be) a simplified version of the actual scientific studies, and sometimes written by press releasers rather than the scientists. Every once in a while I read both and wince.
One thing to know here is that many people in tech think that academic research in science is basically useless. This is downstream of some actually true things, like that academic AI research is not on the cutting edge of building big AI systems and that most drug discovery research doesn't work out when you try to commercialize it. That sort of thing, plus things like the replication crisis in social science, have led some people, probably including the tech executives who staff the futurist right, to skepticism broadly about funding for science, which is totally off base.
What irritates me about that attitude is the double standard: in the creative destruction of capitalism it is a good thing that people can try dumb ideas only to have their startup fail. But every scientific paper that does not describe something with immediate utility that is obvious to everyone is a huge waste of money. Never mind that your startup employs my former students.
As I understand it, Pfizer understands that (it’s constantly acquiring NIH-funded startups) but OpenAI doesn’t (it’s just hiring good software engineers). Differences between industries and companies.
More people should be told how Ozempic basically was developed thanks to weird-ass, seemingly useless research on lizard venom.
"most drug discovery research doesn't work out when you try to commercialize it"
That means you need more R&D not less. Because you don't know what's going to pan out.
I think there’s a more charitable way to put some of their concerns. Academia is set up in individualistic ways that encourage individuals to pursue their own crazy views, find out some neat things, and move on while making a name for themself. While collaboration is on the rise (particularly in biomedical science and big physics) you still put the names of the individuals on the publication, rather than doing it officially as a company or institution working to some collectively agreed upon goal. Academia is great for showing that some effect exists. It’s not great for figuring out how to use that effect in an economically viable way to make the world a better place.
More succinctly: academia is filled with scientists; it's mostly not filled with engineers.
But even academic engineers are doing something different than corporate engineers. Every academic has a personal website announcing their personal research projects. Very few corporate people do.
I find it comforting that they think that, and will be worried if and when they stop and say: "hey, this one specific thing is actually very useful."
It's absolutely stunning the way the Trump administration took some reasonable critiques and ran with them allll the way off the wingnut cliff.
Yes, the Democrats/progressives went too far with "men are toxic" --> let's revel in our support for outright a-holes, misogynists, and rapists!
Yes, Democrats went overboard with DEI stuff --> HULK SMASH ANYTHING RESEMBLING DEI!
Yes, science has become politicized and full of liberals --> HULK SMASH SCIENCE!
Yes, some government spending is bloated and inefficient --> let's feed government agencies into the wood chipper and joke about it on Xitter!
Yes, illegal immigration is unpopular and Democrats should have done something about it sooner—> let’s disappear some legal immigrants to an undisclosed location in Louisiana or a sh*thole prison in El Salvador!
It's beyond painful to see.
Some people are actually concerned about folks on the other side of the political spectrum being too extreme. Other people are actually extremists themselves who see extremism on the other side of the spectrum as a positive good and an opportunity to empower themselves to run wild.
Like socialists who saw Trump's election as an opportunity to overturn the Democratic Party the way he did the GOP.
"With crisis comes opportunity."
One thing that blows my mind more than anything else about the Trump administration is the alienating of potential allies, both foreign and domestic. My wife's cousin genetically modifies wheat. Previously, his political enemies have been the anti-GMO left. A few weeks ago, he posted something on Facebook about a bunch of people he worked with at the USDA getting fired out of nowhere and how that's going to make his job more difficult. I'm definitely open to the idea that there's spending to be cut at the USDA, and I definitely don't think "give farmers whatever they want" is good policy. But it's crazy to me the way they're missing lay-ups and alienating one of the few branches of science and academia that should be friendly to them; farmers threatened by environmentalists.
I am in this field and have three not-very-well publicized facts about indirect rates to share:
1. They are typically expressed as a percentage of the direct cost, not the total grant amount. For example, if the direct cost is $100 and the negotiated indirect rate is "60%", then the indirect reimbursement would be $60, which makes a total grant of $160, so 60/160 = 37.5% (see the sketch after this list). It's so stupid that scientists let the accounting shorthand become the number we discuss, because it makes it seem like huge amounts are being siphoned off when it is actually about half as bad as people think.
2. The amount of indirect costs that can go to administration is capped at 26% (of the total grant) and as far as I can tell, universities were already hitting that cap circa 2010, before DEI really took off. So the incentives are not really as aligned as people may think for more indirects to lead directly to more administrators.
3. Research is expensive! Even with indirect costs, universities lose money on doing biomedical research and fill in the gaps with undergrad tuition, clinical fees if they have a hospital, state funds if they're a state school and philanthropy if they can find any rich patrons.
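To make point 1 concrete, here is the arithmetic as a minimal sketch; the $100 direct cost and 60% rate are just the example numbers from above.

```python
# Indirect ("overhead") rates are quoted as a percentage of DIRECT costs,
# so the share of the TOTAL award that goes to overhead is smaller than
# the quoted rate: share = r / (1 + r).

def overhead_share(quoted_rate):
    """Fraction of the total award that a quoted indirect rate
    (expressed as a fraction of direct costs) actually represents."""
    return quoted_rate / (1 + quoted_rate)

direct = 100               # example direct cost from point 1 above
rate = 0.60                # the "60%" negotiated indirect rate
indirect = direct * rate   # $60 of indirect reimbursement
total = direct + indirect  # $160 total award
print(indirect / total)    # 0.375 -> 37.5% of the total award
print(overhead_share(0.60))  # same result from the general formula
```

So a headline "60% overhead rate" means 37.5 cents of every awarded dollar, not 60.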
There's a great book for the wonks on here about university finances by a former chancellor for finance of UC San Francisco. It's published with a CC license so is freely available here: https://escholarship.org/uc/item/59p124ds
The second Trump administration is worse than I expected. I wonder if they are trying to cause a recession. Maybe the inner circle think four years of continuous expansion are impossible, and want to get their recession out of the way early, so that it will be morning in America in 2028. This worked for Reagan, though I’m not sure how much his administration coordinated with Volcker, whose policies at the Fed caused the 1981 recession.
I don’t wanna assume Trump, Musk, et al. have much in the way of foresight or strategic thinking, but if we offer them this charity then we could see them attempting to engineer a recession where the increased unemployment is concentrated among committed Democrats and blue states. That would include their defunding of scientific research and universities in general as well as NGOs. The impacted individuals would cut back on consumption, suppressing aggregate demand, and thereby putting downward pressure on inflation. Ultimately that could allow the Fed to cut interest rates. From the perspective of Republican leaning constituencies and some swing voters, this could be a net positive with prices stabilizing and borrowing costs for mortgages and car loans falling—without elevated unemployment among these groups.
Again, I doubt the administration is engaged in, or even capable of, conceiving and executing such a plan.
"... attempting to engineer a recession...."
Chris Rufo's preoccupations are a better guide to the regime's thinking on this front, and he is not an economic thinker at all, just a culture-war jihadi. He's not trying to reduce aggregate demand in college-towns, he wants to crush universities because they champion enlightenment values, and those lead to secular liberalism.
Kyla Scanlon, I think, did a good job of describing some of the thinking on this out there (out there in both senses)…
https://kyla.substack.com/p/an-orchestrated-recession-trumps?utm_medium=web&triedRedirect=true
ICYMI, I posted this before.
Interesting, but the likelihood that Trump has the patience to wait out 6 months to a year of economic collapse is negligible, unless he really is totally and completely checked out.
Trump might plausibly think “I understand my immigration and tariff policies could cause short term pain, but that won’t really hurt my party so early in the election cycle.”
Carter brought in Volcker in ‘79, as it came to be accepted that a serious recession was needed to tame inflation. If you look, you’ll see rates began rising in 1980 (Reagan took office in 1981). There was opposition within the Reagan administration and fear as to the depth of the recession. But to his credit Reagan said, “We’re doing this.” And it worked.
Obviously a techno-futurist worldview should logically indicate pro-science positions, but shouldn’t nostalgia for the 1940s-50s postwar boom also include nostalgia for great, impactful science? Why are these cohorts all happy to take on the religious right’s anti-science positions?
They're just a flailing bunch of morons. Nothing, and I repeat NOTHING, is thought out or coherent except their purges of Trump's enemies (THAT part is impressively thorough and devious). Not their foreign policy, not their economic policy, and not their policy on science. It's all memes and impulses and emojis and lib-owning and making it up as they go along. It doesn't help that the guy in charge is an obese, near-80-year-old man with the diet and sleeping habits of a teenage club hopper.
What anti-science positions do you believe the religious right holds?
Some of the signature views of the American Christian right include teaching creationism in schools and opposing stem cell research. Related views that may be considered oppositional to scientific consensus are their opposition to abortion and comprehensive sexual education.
Creationism is a fringe view, even among Christians. Opposition to stem cell research, abortion, and sex ed has nothing to do with science.
The first sentence is absolutely and demonstrably untrue in the United States to an absurd degree. I genuinely don't know how you could think that unless you are in a bubble where almost all the Christians you know are highly educated Catholics and mainline Protestants.
Take a look at this polling: https://news.gallup.com/poll/647594/majority-credits-god-humankind-not-creationism.aspx
Even among Catholics, who have absolutely zero doctrinal obligation to believe in creationism (and indeed one could argue that continued belief in creationism is heretical at this point), 32% subscribe to a hard creationist view that humans were directly created by God within the last 10,000 years.
That poll does not seem to address including creationism in school curricula.
"Opposition to stem cell research, abortion, and sex ed has nothing to do with science."
I think this will be news to a lot of commenters here.
Yeah, I think that statement is mostly correct. Those are moral objections, not disagreements about science. People who think harvesting stem cells is morally impermissible aren't denying the scientific value of doing it. I wouldn't say it has *nothing* to do with science (because science is what it affects), but the disagreement is not a disagreement about science.
Could be. But I bet it’s not news, just that they haven’t thought things through.
Hmmm you must not be from the South? Where I grew up, creationism was a given.
They’re pro-Silicon Valley science; government funding is only load-bearing for fields like defense, health, and the humanities.
"defense" covers a huge range. Like, government defense funding literally enabled the internet. And it's really not hard to see how AI is relevant to defense.
Silicon Valley literally would not exist if it weren't for government funding, and not just because of "inventing the internet". Google is one of the more famous examples; there was a whole stink from progressives in the early aughts about the fact that most of the search engine tech was invented while the founders were still at a university, on federally funded research.
Hi Matt Y,
Fantastic post, thank you for writing this! I have to get to work, but a couple of thoughts:
1. Do not ever apologize for writing "Trump is bad" posts. Trump *is* bad, and it's ok to say it! Of course I get you don't want to write those posts every day (plenty of other Substacks specialize in this).
2. Exactly right about NIH indirect costs: is there some bloat? Yes. Could they be negotiated down? Most likely. Is slashing them to 15% a good idea? No, it effing isn't! It's not like all this money is going to an "Assistant Vice-Provost for DEI" or some such bullshit; the majority of it goes toward actually useful things that are necessary to make the university run.
Fun fact: I spoke with my department chairman (at a large, well-regarded public R1 university; "R1" means a university that specializes in scientific research, in contrast with a predominantly teaching university). I asked him what would happen if the NIH did cut indirect costs to 15%. He said there's absolutely no way we can run the department on 15% indirects. What would we do then? I asked. There was a moment of awkward silence, and then he said something about "considering various options" that came across as "it would be very bad, but I don't want to tell you the specifics because I don't want to scare you."
Please note, a lot of the funding goes to admin support that is NECESSARY to comply with important federal regulations! Anyone who works with biohazardous materials (pathogens, recombinant DNA) must comply with biosafety rules, anyone who works with human volunteers must comply with IRB (Institutional Review Board) rules, and anyone who uses vertebrate animals, like lab mice, must comply with rules for ethical treatment of said animals.
You can't just fire a bunch of biosafety and IRB and IACUC (Institutional Animal Care and Use Committee) admins and expect everything to go just fine!
I swear, if the indirect cost cut goes through, I will be sorely tempted to take a bag of biohazard waste, put it in my car, drive to the Trumpiest neighborhood near me, and dump that bag in the middle of a sidewalk. Here you go, folks, a little gift from your friendly neighborhood scientist! You wanted to kill biosafety oversight, you got it. (Of course I'd never do it, but it's fun to fantasize about it!)
3. Trump 1.0's championing of Operation Warp Speed (one of the few things his administration did right) was a real opportunity for Trump to align himself permanently with scientists, as opposed to anti-science, anti-vaxx cranks, and he missed it. Sad!