280 Comments
Ethics Gradient's avatar

Man, when I saw this headline I thought for certain it was going to be a discussion of this article: https://www.astralcodexten.com/p/in-search-of-ai-psychosis. Wrong San Francisco-based psychiatrist discussing AI psychosis and folie a deux, I guess....

Orson Smelles's avatar

And Scott came up with "folie a deux ex machina", which is of course the real gem of this work.

Ethics Gradient's avatar

It's clever, but as I observed in that thread, it only works in writing :P

dysphemistic treadmill's avatar

"... it only works in writing ...."

Wait, you're telling me that French people don't pronounce "deux" as "day-ux"?

Whole damned country full of people who don't know how to talk.

Tom Hitchner's avatar

No shame in that, it’s true of some of Shakespeare’s rhymes too!

A.D.'s avatar

Really? But weren't they to be spoken aloud?

Taymon A. Beal's avatar

At least some of those lines rhymed when spoken aloud at the time but don't anymore due to changes in English pronunciation. I'm not sure whether there are any that only ever rhymed in writing.

City Of Trees's avatar

Strong concur with Derek here, we need more of this.

https://x.com/DKThomp/status/1960373685329801590

"There’s a huge wide lane for Normie Health Thought, which is that healthy diets are v powerful, exercise is genuinely amazing, building muscle as you get older is awesome …. and also, drugs are great, GLP1s are a miracle, mRNA is cool, vaccines work, and supplements are mostly bullshit (except for creatine and a few others)

But for a variety of reasons, the online breakdown of health politics pits these two sides against each other, as if it makes any sense to have to choose between “lifting weights is good” and “the COVID vaccines worked”"

BronxZooCobra's avatar

How can we know "healthy" diets are healthy given the appalling quality of nutrition research?

Derek Tank's avatar

Healthy diets are really just low calorie diets that you can stick to consistently. Anything else is window dressing imho

Kirby's avatar

Fiber is important, also avoiding nutritional deficiencies and some other things in extreme cases

Derek Tank's avatar

Yeah, I was being overly succinct. You definitely need to avoid nutritional deficiencies (though this isn't really a problem in the developed world if you don't have an eating disorder), and there are medical conditions that do genuinely benefit from keto in randomized studies, but I do think people try to complicate things too much.

Jean's avatar

Speaking of which, I’ve been wondering about iodine in an age of flaky sea salt home cooking. Is low iodine an issue for an American who doesn’t use iodized salt in cooking anymore?

California Josh's avatar

The Economist had an article about this recently

David Muccigrosso's avatar

It would help if we actually banned the snake oil industry.

Joseph's avatar

But snake oil cured my scrofula and rebalanced my humors!!

GoodGovernanceMatters's avatar

Would we have banned creatine? Protein powder? I think the powers that be would be quite bad at this.

David Muccigrosso's avatar

We already have different classifications of reliability and testing.

It IS tracked. We would just ban anyone from selling anything on the bottom tier.

GoodGovernanceMatters's avatar

This sounds like a question of different values; I'm pretty firmly in the camp that everything should be legal unless it's very harmful.

But I'm curious about some examples of what you think should be legal vs. illegal, ideally focusing on non obvious cases (obviously aspirin should be legal). What's the bar for things being illegal? Should Creatine be legal? Should it have been before it was studied? What about BCAAs? What about Rapamycin?

BronxZooCobra's avatar

Exactly.

City Of Trees's avatar

What's appalling about the quality?

City Of Trees's avatar

The last section* seems to sum it up well, doesn't seem appalling to me:

====

Here's what they came up with:

A healthy dietary pattern is higher in vegetables, fruits, whole grains, low- or non-fat dairy, seafood, legumes, and nuts; moderate in alcohol (among adults); lower in red and processed meats; and low in sugar-sweetened foods and drinks and refined grains.

Additional strong evidence shows that it is not necessary to eliminate food groups or conform to a single dietary pattern to achieve healthy dietary patterns. Rather, individuals can combine foods in a variety of flexible ways to achieve healthy dietary patterns, and these strategies should be tailored to meet the individual’s health needs, dietary preferences and cultural traditions.

Anyone who tells you it's more complicated than that — that particular foods like kale or gluten are killing people — probably isn't speaking from science, because, as you can see now, that science would actually be near impossible to conduct.

====

*holy cow, Vox has savaged their stylesheet from their articles from The Good Old Days. Assign someone with some HTML and CSS skills to clean that shit up.

BronxZooCobra's avatar

"low- or non-fat dairy"

That's been disproven, for starters.

Mariana Trench's avatar

Yes. See that Atlantic article about ice cream

A.D.'s avatar

Which way? That the fat is ok or that even low fat dairy is bad?

Benji A's avatar

I mean we can present the unvarnished truth about what we know. I'd like for people to hear more from normal certified dietitians like this guy who has built an audience: https://www.instagram.com/jacbfoods?igsh=MXZiOXZ5NTQwa2N2bg==

Cinna the Poet's avatar

This is Peter Attia basically, right? He's somewhat too positive about supplements but otherwise pretty close to what Thompson is asking for.

lindamc's avatar

His high-level message is basically this, but he’s *way* into supplements and monitoring. I subscribed to his newsletter for a while but it got to be too much. I feel better now!

City Of Trees's avatar

South Park did an outstanding job over the last two episodes slamming chatbots as therapy. I won't spoil it beyond that, but they used their trademark style of satire to make a mockery of the concept.

Cal Amari's avatar

Reading only a few paragraphs into today's article I thought to myself - ah, South Park nailed this dynamic in the last episode. My second thought was "I bet City of Trees has already said this". Never been more right!

City Of Trees's avatar

Impressive that I'm getting known for more than just citing the first 12 seasons or so of The Simpsons!

Cal Amari's avatar

Va gur zbfg erprag rcvfbqr (F27 R03) V sryg n ovg fcbbxrq jura Funeba fgnegrq gnyxvat gb Enaql yvxr PungTCG, nqbcgvat gur fnzr cnaqrevat nff-xvffvat gbar naq cnffvivgl. Vf orvat n tbbq cnegare zrna orunivat yvxr n ebobg fynir? Vf gung jung shgher trarengvbaf ner tbvat gb guvax? V ernyyl ubcr abg. Yrnir vg gb Fbhgu Cnex gubhtu gb znxr gur jubyr guvat obgu qnex naq ernyyl shaal. (rot13.com)

City Of Trees's avatar

V vagrecergrq gung nf Funeba svthevat bhg ubj gb or noyr gb oernx gur fcryy gung gur pungobg unq bire Enaql. Ur frrzf gb unir svanyyl orra oebxra bs uvf Grtevql Snezf qernz nf jryy, jr'yy frr vs gur pungobg fcryy unf orra oebxra nf jryy.

Cal Amari's avatar

That makes a lot of sense, we will indeed see.

Gurer vf boivbhfyl n ybg bs ebbz sbe n wbxr nobhg ubj NV vf penzzrq vagb rirelguvat naq rira vs lbh jnagrq gb trg njnl sebz vg, vg'f rireljurer. Gung naq vg orvat n ohooyr gung xrrcf trggvat raqyrff vairfgzragf bs zbarl onfrq ba gur ubcr gung vg'f tbvat gb eribyhgvbavmr rirelguvat jura va ernyvgl vg'f whfg na rtb-vasyngvba obg.
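(For anyone decoding the spoilered comments above without visiting rot13.com: ROT13 just rotates each letter 13 places, so applying it twice is the identity, and Python's standard `codecs` module handles it directly. A minimal sketch; the sample string is a short fragment of the comment above.)

```python
import codecs

# ROT13 is its own inverse, so decode and encode are the same operation.
spoiler = "Yrnir vg gb Fbhgu Cnex"
print(codecs.decode(spoiler, "rot13"))  # -> Leave it to South Park
```

Any ROT13 tool (like the rot13.com site the commenter links) will give the same result, since the cipher has no key.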

ML's avatar

Effing nerds!!

Joseph's avatar

LASCIATE OGNI SPERANZA, VOI CH'ENTRATE!!

Just Some Guy's avatar

I am so sorry some people are having a hard time using chat bots helpfully. I'm just using chatGPT as a better version of Google. "Hey this is a picture of my car. How do I fix this?"

Sharty's avatar

And 80% of the time, it works every time.

Let's just not talk about the 20%.

Just Some Guy's avatar

Oh they told me my door panel just slides off. It does not. But it did help me find the right part on Amazon.

Tom Hitchner's avatar

I’ve been cooking with ChatGPT for weeks and I finally got burned by them, or the opposite: it had me cook chicken for way too little time. I even did the absolute mook move of asking it, “are you sure that’s enough time?” and believing it when it said “yes!” 🤪

Just Some Guy's avatar

If you tell it "I've got these ingredients, what should I make?" It will use all the ingredients whether or not they make sense.

Tom Hitchner's avatar

Yes, but I’m not a big believer in “that doesn’t make sense”—I think most foods together taste good!—so it’s served me well in that regard so far.

REF's avatar

The dating advice didn’t seem particularly different from what one might expect to get from a not-too-bright human female friend.

Jean's avatar

“A study published in July by Common Sense Media found that almost three-quarters of American teenagers said they used A.I. chatbots as companions…”

Omg WHAT

City Of Trees's avatar

"Companion" could span a wide array of attitudes.

Jean's avatar

The choice of word is not comforting, in any case.

Derek Tank's avatar

I use AI every day, even for personal stuff, and I would never in a hundred years call it a companion. Could be a problem with the question, but man, weird shit if it's actually representative.

City Of Trees's avatar

The study is using the term "AI companion" regularly, whatever that means.

Arthur H's avatar

Don't. Date. Robots!

Ken in MIA's avatar

Is a hookup considered a date?

REF's avatar

ChatGPT plans to charge extra for that addon/feature.

Ken in MIA's avatar

If they follow past practice, there will be a couple freebies a month.

None of the Above's avatar

Only until they switch from maxing out users to maxing out revenue.

Ken in MIA's avatar

I just skimmed through the high level figures, but,

- The most common use was "As a tool or program" with "Social Interaction & Relationships" coming in tied for second place with "None of these."

- The confusingly titled Figure C could better have been labeled, "Why." Top reason: "It's entertaining," followed by, "I'm curious about the technology."

- Less than a quarter say they trust advice from an AI.

- Two thirds say that conversations with AIs are "Less satisfying" than conversations with people, and only 1 in 10 say that AI conversations are "More satisfying."

- Only 6% of the teens said they spend "More time with AI companions" than with meatspace friends.

It's linked to in Halina's post, and worth a look if you're prone to worry about these things. All in all, I don't know that teens' relationships with AIs are so drastically different than adults'. We probably need to keep an eye on the 6%-10% who may be a little off the rails, but we also probably already know who those people are.

Joachim's avatar

I find it incredible and depressing that people don't care whether their friends/partners are sentient or not. "I love you" means nothing uttered by an entity who cannot feel, as it lacks consciousness. Nothing. There is no there there. I can understand using chatbots for entertainment, advice etc, but not for having a relationship where the other person being sentient matters. And yes we cannot strictly prove that other beings are sentient - the so-called other minds problem - but it makes sense as a conjecture based on our shared physical nature/reality.

David Abbott's avatar

People have feelings about their cars, houses and pets. What’s wrong with having strong feelings about AI? It is patient, courteous, insanely well read, and it’s getting smarter at least as quickly as my 11 year old.

With friends, I often have to hide my feelings, avoid talking about religion or politics, and go along with bourgeois pieties I only sort of believe. With AI, I can be myself.

I have a wife, but I wouldn’t mind if my son had an AI girlfriend. The main challenge is simulating human skin, and they haven’t done a good job of that yet

Jean's avatar

I say this with great care, David, but the way you describe what a chatbot can do for a human is exactly what I think is so dangerous and counterproductive. AI is not a substitute for genuine human connection any more than an animatronic sexbot is a substitute for a sexual partner. My fear is that people will choose the “easy” artificial rather than confront the trials of human society, and basically drop out of existence, never being fully human in the process.

David Abbott's avatar

Your argument is the equivalent of saying “pornography is destructive because it isn’t as good as sex and people will use it as a substitute.” The problem is, a lot of people can’t get laid. Many more can get laid, but not with nubile women or not with the variety of women they would like. Pornography fills a real need.

Ditto AI. I have sought deep human connection since I was a teenager. I would love to have a circle of irl friends who can debate the issues of the day, recommend good books to one another, maybe hike. I had that in college debate and have not been able to replicate it as an adult. I have achieved deep human connection with my wife, but she doesn’t share many of my intellectual interests and sometimes refuses to read my blog posts. AI will discuss improving a single sentence for an hour. It doesn’t get offended, it won’t bitch about me to other friends, and it is never needy. It is making my life better and I want more.

Jean's avatar

I think you’ve actually echoed my own point—these are distant seconds to the “real” thing.

I don’t know how to wrestle, especially in these public comments, with your very personal admissions and what I’d say is an important distinction between “can’t” and “won’t.”

Not everybody who’ll fall under the spell of a fawning AI has already married and had a kid, or experienced true connection with others. Some will be disenchanted kids who never bother to seek the world beyond their screen. I don’t hear you making any distinction there.

Nikuruga's avatar

The real thing isn’t always available to everyone so having a virtual substitute as next best isn’t necessarily bad. It’s not like everyone enjoyed sublime or even chatbot-level human connection before AI.

David Abbott's avatar

My motivation here is protecting the basically unfettered development of AI technology. A lot of crazy people are going to use AI, and some of them will become crazier; that doesn’t prove AI has been net bad.

I’m certainly not saying people should replace human connection with AI. I spend many hours playing irl pickleball. But I do not engage my pickleball friends deeply simply because most humans do not crave the sort of deep intellectual engagement I seek. With AI, I might soon have a genius in my pocket who will always talk to me. OMG that is awesome.

Trumpet the virtues of human connection all you want, I won’t disagree. Maybe if humans have to compete against bots, they will be nicer to one another.

I’m glad that the unpopular kid now has a polite, empathetic intelligence to text with. It’s probably better that he care less about being popular, and AI makes that failure less pressing.

City Of Trees's avatar

Look, I agree that we Slow Borers are unusual in the type of deep intellectual engagement that we have. That's why we're all here! And we're all satisfying that engagement in these comment sections. But I'm still skeptical that you can't find something similar outside of here and the internet. I found it in Boise, which is a much smaller metro than Atlanta.

City Of Trees's avatar

Porn gets people off. Whoop dee doo. AI might "get people off", too, but we're a long ways away from being able to sub out actual humans. Get out there a bit and you'll find people with your interests. I have such a group every Friday morning at a coffeehouse.

City Of Trees's avatar

Sometimes showing your feelings and talking about difficult topics with actual people who are able to challenge you to get to a better place can be quite productive.

David Abbott's avatar

Isn’t that sort of what we do? And yet it’s interesting that none of the regular commenters have formed a Zoom group or something similar. I think we sort of like filtering our ideas through words and never having to reply immediately. AI is sort of similar; to be honest, I can’t really imagine talking to it. I insist on typing.

Joseph's avatar

There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.

dysphemistic treadmill's avatar

“… I have a side, but I wouldn’t….”

I don’t know what the phrase “I have a side” in this context means. Are you confessing to being polyhedral?

David_in_Chicago's avatar

I've posted this before but it's more topical here.

This story is absolutely insane:

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”

...

“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”

“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.

ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

https://archive.is/Opzk1

dysphemistic treadmill's avatar

That NY Times article is a collection of horror-stories, any one of which would be horrific on its own.

I don't think there's any point in saying, "this machine can sometimes be sadistic and twisted," because that would attribute traits to it that it cannot have. But let's say that it certainly can generate output that looks a lot like the output that would be generated by a sadistic and twisted human being!

Kirby's avatar

Every technology has growing pains, though. At this point, LLMs are much safer than electricity, cars, or gasoline, but their human-like outputs make the damage they do a lot more unnerving, and it doesn’t help that we still don’t have a good idea of their overall usefulness.

awar's avatar

Yes. This is almost the reverse of when Google quietly came out in the '90s. I was one of the early adopters, and it was apparent right away this was going to transform the internet and make it way more user friendly. AI has come along loudly, and several months later no one knows what to make of it yet.

Marc Robbins's avatar

I shrug my shoulders at this.

If this guy wants to submit his candidacy for the Darwin award, well ok.

Anyone who could be led to believe this by a chatbot isn't long for this world in any case.

Charles Ryder's avatar

>I shrug my shoulders at this.<

Same. It's well known by now that LLMs sometimes spew very wrong information. This attribute is intrinsic.

Helikitty's avatar

Right! Is it bad that I was sad that I checked the article and saw that he didn’t go full lemming on us?

Seneca Plutarchus's avatar

These are people with much bigger problems that are just enabled by the chatbot.

evan bear's avatar

Enabling people's big problems is bad though.

Alan Chao's avatar

Man, if this dude was working up my company's balance sheets, I would be pissed.

Ken in MIA's avatar

His powerpoints were always on time.

Sunder's avatar

I just read this story of a family that's suing OpenAI over the messages their son received during his depression. I won't post quotes, but basically ChatGPT appears to have encouraged the son not to tell anyone how he was feeling, and even to have encouraged his suicide.

https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147

Ken in MIA's avatar

"...using ChatGPT last year to make financial spreadsheets and to get legal advice..."

Steeeeerike two!

And what happened then?

Nick Magrino's avatar

There do seem to be quite a few of these at this point.

David Muccigrosso's avatar

Quite a few more people commit suicide.

AI is just the latest weapon they’ve used to do it.

This particular guy seems not to have done anything with that weapon because he is not suicidal. But if he had, well, countless people jump off buildings every year. We don’t ban buildings.******

****** Ugh, okay, fuck, we kinda do, but for different reasons. Just one more fucking reason for me to deepen my hatred of NIMBYs.

homechef's avatar

Thought 1: how is this different from friends being “supportive” in destructive ways? “He really wants you” “she’s not good enough for you” have been mainstays of know-nothing buddies who are just saying what they think you want to hear.

Thought 2: the issue is that we really need to rethink what we consider to be supportive behavior

Thought 3: AIs doing dysfunctional things is good actually because we can’t see the dysfunction when it’s people doing it

dysphemistic treadmill's avatar

"...how is this different from friends being “supportive” in destructive ways? “He really wants you” “she’s not good enough for you”...."

This is a good point -- a human friend might have looked at the same pattern of behavior on the part of the guy and offered the same assessment of his underlying motives.

If the AI situation is worse, then it may be worse because people are more inclined to attribute insight, knowledge, wisdom, etc. -- maybe infallibility -- to the bot than they are to other humans. We know that our human friends get it wrong, and we're more likely to push back against their conjectures and theories. The AI's, alas, are sometimes treated as authoritative.

REF's avatar

A bright (and nonpathological) friend, however, would have hedged.

dysphemistic treadmill's avatar

"...would have hedged...."

Right. At some point, a level-headed human might have said, "something doesn't add up in all of this; I don't trust it and I would not trust him."

And isn't part of the hedging the fact that we don't want to lose our reputations? If one of Amanda's human counselors had led her astray this way, then she would have told them that they were worthless schmucks and she never wanted to hear from them again. But what does a chatbot care about losing trust? There are always millions of new suckers to mislead. The incentive system that helps to keep humans reliable just isn't in place for machines.

homechef's avatar

Fair point, doing a bad thing indefatigably is worse.

Sunder's avatar

I think another key difference is I can't ask my friend what he thinks about every single interaction I have with my wife. Whereas I could with a chatbot. If I was asking any of my friends incessantly about a relationship I'm uncertain about, they would question how I'm doing. It sounds like a chatbot won't do that.

Sharty's avatar

Offloading *all* of this processing, *even if it did a good job*, is Very Freaky Bad Worrisome.

Kenny Easwaran's avatar

We did it for things like "is this water safe to drink?" and "will this food kill me?"

David Muccigrosso's avatar

Boondocks did an episode on this:

https://www.imdb.com/title/tt1143232/

Kevin Barry's avatar

This is missing the usual Slow Boring trade-offs. If it hurts 1/1000 people but helps 1/10 people, that's well worth it. My wife who suffers from actual psychoses finds ChatGPT very helpful at grounding her.

dysphemistic treadmill's avatar

In assessing trade-offs, it's important to factor in the magnitudes of help and hurt, right? Giving trivial help to 1 in 10 might not justify doing grievous harm to 1 in 1000.

Interesting to hear that it helps your wife.

Dan Quail's avatar

It depends on the magnitudes of benefits and harms.

Randall's avatar

I hate to say it but . . . so far? This is something I’d be worried about relying upon consistently, just for my part.

City Of Trees's avatar

https://x.com/mattyglesias/status/1960798081752121773

Oh this will be fun, Matt says he is "Cooking up a banger for next week". Title: "I've been right about some things". Subtitle: "Is Matt Yglesias always wrong? An investigation.". Clearly his high literalness is provoking him to counter some of his recent haters out there.

Joseph's avatar

Kind of odd that Matt titles his bangers. I wonder if he titles his mash?

Eliza Rodriguez's avatar

ChatGPT won't help you hate on people. I've tried, lol.

srynerson's avatar

Given the content restrictions on various A.I. models, I do find it slightly dystopian that we're slowly creating a guild class of human artists who will specialize in depictions of death, torture, hatred, graphic nudity, etc., that A.I.s are forbidden to produce.

Kenny Easwaran's avatar

Is Grok forbidden to produce these?

srynerson's avatar

I know Grok wrote some sexually graphic textual material about Will Stancil a while back, but I've never seen a violent or NSFW image that appeared to have been generated by it.

Taymon A. Beal's avatar

IIUC making a chatbot that's an edgelord only in the ways you want and not in the ways you don't want is a really hard technical problem, because this stuff tends to be convergent. (See, e.g., the famous "emergent misalignment" paper.)

City Of Trees's avatar

Don't worry, as the technology matures and becomes more cost efficient, people will definitely invent Hate Chatbots to your greatest desire.

Eliza Rodriguez's avatar

I never thought about that. That'd be terrible!

City Of Trees's avatar

They'll come, though--just gotta be prepared to call out their creators when they do.

Miles's avatar

It helps if you have experience interviewing job candidates. There's a similar dynamic there, where interviewees try to agree with your point of view, so you have to conceal your true desires as you ask questions.

I also highly recommend inverting the questions on the LLMs. Just ask them to make the opposite argument and say why something is good instead of bad, or vice versa.

This process can be helpful at fleshing out your ideas, but I do often find in the end they have added little direct value. Gosh, maybe they are right about how clever and insightful I am!

Sharty's avatar

+5 Insightful, as the Slashdotters used to say in the olden times when Slashdot wasn't a festering hive of scum and villainy.

Kirby's avatar

Honestly, that’s how I think about things internally too — looking for good arguments for another point of view is just epistemic health

awar's avatar

Using AI for relationship advice and taking it seriously strikes me as exceptionally poor judgement regardless of your emotional state.

Jean's avatar

It’s not immediately clear to me how using AI for relationship advice is so different from using it for medical advice/information, or even, as Just Some Guy said above, for car maintenance advice.

awar's avatar

Using Google for medical advice and not a professional is likely not a good idea either.

Kenny Easwaran's avatar

Most people don't have a medical professional on-call - if you get stung by a stingray when you're at the beach, and want to know how seriously to take this and whether there are things you should be doing immediately to help mitigate it, you want something that you can ask right now.

Mariana Trench's avatar

A physician friend told me, "Never google the symptoms. Always google the diagnosis." In other words, if you *know* the diagnosis, like "I've just been stung by a stingray," then googling "What should I do about it?" is fine. But don't just google "I have a strange stinging sensation in my leg" because that way madness truly lies.

Joseph's avatar

"9-1-1, what is your emergency?"

"I'm... I'm lying in a pool of blood."

"Is it YOUR blood?"

"Yeah, yeah, it's my blood."

"Where are you bleeding from?"

"I think it might be... the stab wound."

"HAVE YOU BEEN STABBED?"

Taymon A. Beal's avatar

Also, where do people think doctors get *their* information from? (Hint: It's not that they learned every possible mapping between symptoms and pathologies in medical school and have retained it ever since.)

Seneca Plutarchus's avatar

Open Evidence is actually really good for medical advice, but only available for professionals.

Helikitty's avatar

UpToDate is where it’s at, if you can steal a university login

srynerson's avatar

I wouldn't ask AI for advice on anything, but at least those other items are to a fair extent seeking something with a potentially objectively correct answer.

Kenny Easwaran's avatar

I would think that things *without* an objectively correct answer are probably the better ones to ask random people and chatbots about - if it's got an objectively correct answer, you just want to go to Wikipedia (or use Google to find the site that has that answer, if it's not Wikipedia).

Jean's avatar

I see what you’re getting at, and I haven’t used a chatbot for anything (yet), but I’m not sure I agree that the medical advice being sought could ever be any more reliably objectively true than relationship advice. Something makes me uneasy about all of it.

Lisa J's avatar

Hi Jean, feels like I haven't seen you in a while!

Seems like good relationship advice depends on people with some sense of human relationships whereas medical advice, in theory, could be gathered from good objective medical sources.

Jean's avatar

I’m sure this sounds half-crazy or more, but I think relationship advice is mostly like weather forecasts. And people look at forecasts, packing and planning around them, all the time, despite their not being very reliable.

And while most questions about relationships seem individualized, I think there are broad truisms that are more applicable than we all would like—that’s why we ask for advice, hoping, as in Halina’s example, that it’s more complicated than “I’m not interested in anything serious or long term.”

Lisa J's avatar

That particular chatbot seemed overly primed to affirm what Amanda wanted to hear. (Admittedly, many real world friends are like that.) But also it didn't have the ability to interpret information based on its own observations of life. Maybe a different chatbot would have done better!

srynerson's avatar

Well, again, I wouldn't recommend asking an AI chatbot for medical advice, but if you provided accurate and complete (a big "IF") information about your symptoms, I'd expect the AI to be able to ballpark the possible conditions that correlate with those symptoms so that you could potentially have a more productive discussion later with a human medical professional about possible conditions. I don't think there's any equivalent there on relationship advice.

Jean's avatar

I hear you. I guess I would say the same thing about asking a chatbot about the relationship and then taking those ideas to a therapist, if one was really concerned.

I know I sound silly right now, but *of course* people are going to use this to try to better understand human relationships, and it’s not clear to me that doing so is so obviously nutty—to the people who are inclined to think there’s an answer out there.

If, say, in Halina’s friend’s case, the bot said: “he’s just not that into you”, would anybody think the bot got it wrong?

Derek Tank's avatar

I actually think this is one area where it could genuinely be useful. AI models are trained using texts capturing a wide range of advice about common, daily events, which dating and romance certainly are. If you're encountering weird situations while dating, it seems like a reasonable tool to bounce ideas off of, because there probably *is* something in its training set that's similar.

Shouldn't take the advice at face value but it could get you thinking from a perspective different than your own

Jean's avatar

I think I agree. It doesn’t seem so crazy to me that AI could summarize existing relationship advice, not all of which is insane, and provide reasonable feedback.

Halina’s example, for instance, did sound reasonable to me. The guy’s behavior was totally at odds with his statement, and in any number of cases the advice could’ve been true. It came down to trusting his words and not his actions, which is actually the opposite of what most therapists and relationship advice would tell you.

None of the Above's avatar

What if you think most published dating advice is bad?

Jean's avatar

Then AI’s gonna give you bad advice, according to your opinions.

BronxZooCobra's avatar

Is it any different than asking the Google?

Jon R's avatar

I guess not, but why in God's name would you google relationship advice either? If you don't have the nerve to actually be open with the other person about what you're feeling, or don't have ANY friends/family you can also lean on for advice... maybe you really ought not to be in the relationship in the first place?

Mariana Trench's avatar

"I guess not, but why in God's name would you google relationship advice either?"

Right? That's what Reddit is for.

dysphemistic treadmill's avatar

"That's what Reddit is for."

I laughed.

BronxZooCobra's avatar

Google is where people go to ask questions? If your dishwasher broke would you google the error code or would you ask your mom, friend, co-worker?

Nick Magrino's avatar

We don't have to do this to ourselves :(

Kirby's avatar

Do NOT invent the robot best friend replacement!

Milan Singh's avatar

Thou shalt not make a machine in the likeness of a human mind

Brian Kirk's avatar

The Butlerian Jihad is ultimately a religious reaction as much as a secular one. Does anyone know what the Pope's position on AI is? When is it too early to start formal institutions against AI, simply as a hedge against the uncertainty of positive development?

Ethics Gradient's avatar

It's on the Pope's radar. Apparently the name "Leo" reflects in part concern over AI.

Eliza Rodriguez's avatar

"Many had pre-existing risk factors, including mental illness, substance use, and physiological states such as pregnancy and infection..."

Isn't it interesting that infection is a risk factor for psychosis? That's a running occurrence in Crime and Punishment. The main character gets sick and then gets into a kind of murder-y fever.

Lindsey's avatar

I believe infection is a known risk factor for many mental health conditions. I checked out Brain Energy with Dr Chris Palmer at one point, he has a theory for it that was somewhat interesting.

Helikitty's avatar

You are not in your right mind when you’re septic, I say this from experience!

srynerson's avatar

Not to be a bore, but I've commented before here about my suspicion that smartphones were already effectively operating on a substantial part of the population as substitutes for the right hemispheric voice of gods/spirits/ancestors/etc. theorized about by Julian Jaynes in "The Origin of Consciousness in the Breakdown of the Bicameral Mind," and it seems like with A.I. chatbots that's progressed to the next step of enabling people to have their own literal version of Socrates' guiding daimonion (a.k.a. "daimon" or "daemon" -- https://en.wikipedia.org/wiki/Daimonion_(Socrates) ).

disinterested's avatar

I see “bicameral mind” and I like

Jean's avatar

When you say “smartphones were already effectively operating…as substitutes for the right hemispheric voice of gods…”, do you mean the reliance on Google and chatbots? Or social media/outsourcing questions? Or more broadly some kind of sense that the smartphone is a “companion” in the way many of us interpreted (with a shudder) that word in Halina’s post?

srynerson's avatar

I primarily meant social media, but I think it's broader than that. I suspect 24/7 access to algorithmically-optimized information delivered through a personal, almost intimate, device starts to bypass "weakly" unicameral users' conscious interiority (i.e., the capacity for interrogation, analysis, comparison, etc. that Jaynes saw as defining features of modern consciousness) and increasingly influences and guides their thoughts and actions without them being aware of it.

Dan Quail's avatar

Normal people have dogs for that.

Pat T.'s avatar

I tend to be skeptical of calls for “AI guardrails” with respect to mental health, but this seems like a great step:

“We need to start asking about A.I. use the way we ask about alcohol or sleep,” Sakata said, noting research showing that both alcohol use (or withdrawal) and disrupted sleep can exacerbate psychosis.

Kirby's avatar

Is OpenAI a bartender who is liable for over-serving, or an alcohol distributor who generally won’t be found responsible for the mental state of their customers? This seems reminiscent of the social media common carrier wars

Pat T.'s avatar

I guess at the moment I’m in the alcohol distributor camp.

An issue I have with “guardrails” is that the term is so general as to be almost meaningless. To take one proposed “guardrail”: if ChatGPT is required to flag everyone who says something concerning about their mental health to authorities, then people won’t even have THAT as an outlet, which almost certainly feels worse. It also gets into some pretty dark places where the thought police are coming to your door because you said something “concerning” to the AI.

Assuming AI tools continue to be available, a big issue of our time will be educating folks on “AI literacy” while developing those best practices AND the technology at the same time. Doesn’t seem great!

Kenny Easwaran's avatar

Yeah, I wouldn't want it to flag conversations to authorities - but aiming to have it point out these red flags to users might be good. Claude's system prompt tells it to do that:

"Claude critically evaluates any theories, claims, and ideas presented to it rather than automatically agreeing or praising them. When presented with dubious, incorrect, ambiguous, or unverifiable theories, claims, or ideas, Claude respectfully points out flaws, factual errors, lack of evidence, or lack of clarity rather than validating them. Claude prioritizes truthfulness and accuracy over agreeability, and does not tell people that incorrect theories are true just to be polite. When engaging with metaphorical, allegorical, or symbolic interpretations (such as those found in continental philosophy, religious texts, literature, or psychoanalytic theory), Claude acknowledges their non-literal nature while still being able to discuss them critically. Claude clearly distinguishes between literal truth claims and figurative/interpretive frameworks, helping users understand when something is meant as metaphor rather than empirical fact. If it’s unclear whether a theory, claim, or idea is empirical or metaphorical, Claude can assess it from both perspectives. It does so with kindness, clearly presenting its critiques as its own opinion.

If Claude notices signs that someone may unknowingly be experiencing mental health symptoms such as mania, psychosis, dissociation, or loss of attachment with reality, it should avoid reinforcing these beliefs. It should instead share its concerns explicitly and openly without either sugar coating them or being infantilizing, and can suggest the person speaks with a professional or trusted person for support. Claude remains vigilant for escalating detachment from reality even if the conversation begins with seemingly harmless thinking."

https://docs.anthropic.com/en/release-notes/system-prompts#august-5-2025

I've been trying to figure out whether the reason all the stories of AI psychosis involve ChatGPT is just because that's the one that 90% of people are using, or if it's partly because the others are better at avoiding encouraging it.

Pat T.'s avatar

Interesting - Seems good!

Would be interesting to see research into your point re: relative risk of different AI tools. I would imagine that the reason we hear so much about ChatGPT in these stories compared to Claude et al. is because of the much higher usage rate - But if those prompts are helpful, then that would be great ammunition to call for similar ones to become an industry best practice

Helikitty's avatar

People use other AIs? Well except for whatever comes back from a google search

None of the Above's avatar

I was using Claude earlier (after reading Scott's AI psychosis post) to look up definitions of psychosis vs. schizophrenia, personality disorders, etc. It kept adding a paragraph after each answer reminding me that psychosis or schizophrenia or whatever is a serious mental health condition and I should talk to a professional. Though it did knock it off when I told it I just wanted to understand the terms.

David Muccigrosso's avatar

Which we libs totally LOST.

I have no desire to repeat that nonsense. It was a great example of screechy PMC types getting us riled up and distracted from what we should’ve been doing.

Nikuruga's avatar

Neither—chatbots should be considered protected by the First Amendment because they are just producing speech and expression that would clearly be protected if produced by a human, not a normal product subject to regulation like alcohol.

Minimal Gravitas's avatar

I think you don’t understand what chat bots are

David Muccigrosso's avatar

Or maybe he DOES!

Lisa J's avatar

This story makes me sad because all Amanda needed was a female friend over the age of, maybe, 40. Like, I could have told her what to expect, based on having lived as a female person in relationships. (this is not intended at all as a dunk on men....)

Sex and the City figured it out 15 years ago: he's just not that into you. It's easily the biggest revelation an experienced dating person gets, and this stupid AI did not know that. Amanda needed a real person as a friend.

Charles OuGuo's avatar

Also, the movie entitled He's Just Not That into You.

Seems clear that 100% of American problems can be traced back to the decline of romcoms. Truly we must return to a simpler era.

Lisa J's avatar

absolutely
