Can an AI health coach fight chronic disease?
Arianna Huffington and Sam Altman say they have a plan
Arianna Huffington and Sam Altman believe their new app will transform the health of our country.
It’s called Thrive AI Health. In their Time op-ed announcing the product, they describe it as “a fully integrated personal AI coach that offers real-time nudges and recommendations unique to you that allows you to take action on your daily behaviors to improve your health.”
Aside from a few examples (it will remind you to trade soda for sparkling water!), the announcement is sparse on details and occasionally sounds like it was written by an AI bot itself. But it demands a certain level of attention for the same reason that many Silicon Valley projects do: It’s supported by influential people and will likely receive substantial funding. And some of the claims in the pitch are truly revolutionary. By training their chatbot on the “best peer-reviewed science,” as well as each individual’s “personal biometric, lab, and other medical data,” Huffington and Altman believe they can help fight chronic disease and significantly change the diets and overall health of millions of people.
At this point, I imagine most people reading will have an eyebrow raised. Artificial intelligence chatbots are still prone to hallucination, there are questions around data security and privacy, and far too little is known about the project for such grandiose claims to be truly credible.
But what could an AI health assistant that consistently nudges us toward better dietary habits actually accomplish? Would it genuinely make Americans healthier? Or is Thrive AI Health another empty Silicon Valley promise?
The problem is hard to solve
Altman and Huffington begin their essay by citing a CDC report that finds 129 million people in the United States suffer from a chronic disease, and that 90% of our healthcare spending goes toward treating their associated conditions. They believe they can help ameliorate this crisis by “democratiz[ing] the life-saving benefits of improving daily habits” with their AI health app. Obviously, it’s important for everyone to have access to accurate information about their health and well-being. But Altman and Huffington list only one example of a potential user for their app: a busy professional with diabetes who is struggling to manage blood sugar levels.
And I think that’s a pretty revealing limitation of what the app might be able to accomplish.
A “busy professional” is likely someone of financial means and education, so it’s certainly possible that a push notification reminding them to eat a healthy meal, along with a simple recipe for baked chicken thighs, might inspire better lifestyle behavior. However, that ignores what the CDC finds to be the main drivers of chronic disease in the United States: social determinants of health.
In other words, if you live in a low-income county or have less formal education, you are more likely to develop a chronic disease.
And the difference isn’t marginal: one study found that someone with less than a high school education was almost twice as likely as someone with a college education to develop diabetes. There are many reasons why lower-income people struggle to be healthier. The stress of poverty makes smoking and alcohol abuse more likely. Healthy foods are generally more expensive than quick, processed foods. Poor children participate less often in fitness activities, helping solidify this health gap at an early age. These are problems that a healthy-lifestyle push notification probably can’t account for, no matter how personalized it may be.
Many healthcare plans already offer chronic disease management programs, many of which have been proven cost-effective and beneficial for enrollees. While Huffington said in a previous interview that the AI health coaches “are not about replacing anything,” she didn’t explain how the app would work with, or improve upon, what this substantial industry already offers.
It’s also important to note that the internet has already, to an extent, democratized health information. Advice to limit high-calorie processed food and to exercise moderately throughout the week is just a few clicks away for the 95% of Americans with internet access. A recent study from Tufts University found that the number of Americans with poor diets has decreased by 12% over the past 20 years, so perhaps the internet has played a role in spreading healthier habits. However, the bulk of that change has occurred in high-income households, and the prevalence of chronic disease continues to rise.
So the problem likely isn’t that people lack the information to be healthier; it’s that they lack the resources, time, or capacity to act on that advice. Or, even if someone does possess the ability to get healthier, they might fail simply because initiating lifestyle changes is really hard. Semaglutide drugs like Ozempic could be the shortcut that rewires our brains and fights chronic illness, but that’s an article for another time.
Do digital health interventions work?
I don’t think it’s right to adopt an entirely fatalistic outlook on this issue. Lifestyle change is possible, and the Thrive AI Health app shouldn’t be wholly discounted just because it might struggle to fulfill its founders’ lofty expectations.
The question is whether digital health interventions (DHIs) — the academic term behavioral scientists use to describe such apps — are actually effective in inducing behavioral change.
Fortunately, quite a few studies have explored this question. In an umbrella analysis — a comprehensive review of systematic analyses — published in the Annals of Behavioral Medicine, researchers gathered systematic reviews to identify the most effective digital health interventions for preventing and managing noncommunicable diseases. Using evidence from 85 reviews that spanned 865,000 participants, the researchers found that in general, “DHIs are effective in improving health-related behaviors, including physical activity, sedentary behavior, diet, weight management, medication adherence, and abstinence from substance use in both general and clinical populations.” Specifically, they found strong evidence that tailored reminders, such as mobile phone push notifications, were especially effective when accompanied by human support and more personalized content. However, they also note that they were unable to identify the sociodemographic status of the populations reviewed, so it’s unclear how the DHIs performed across race and class.
Another systematic review exclusively analyzed studies that tested AI chatbot interventions through randomized controlled trials. The researchers made sure to qualify their findings by noting the modest sample sizes and the fact that a notable number of the studies displayed a “heightened risk of bias.” However, they found that overall, “The accessibility and engagement offered by AI-powered chatbots can significantly enhance patient adherence and participation in intervention protocols.”
Since these sophisticated chatbots are so new, we can’t really know if these behavior changes are sustainable for years or decades down the line. But I think this is largely good news for Altman and Huffington. Dropping a health coach in someone’s pocket definitely has the potential to lead to positive lifestyle changes, and at the very least, it likely won’t lead to worse behavioral habits.
It’s all speculative
Despite the two reviews¹ I highlighted, this is still uncertain terrain. The app itself simply doesn’t yet exist. Of course, that means the founders are free to proclaim that their entirely theoretical product will have public health impacts that rival the New Deal. And it’s impossible to concretely say it won’t, because the product could really be anything, despite the fact that right now it is nothing.
A lot of the potential products within the field of artificial intelligence feel this way. Proponents champion these technologies, securing substantial investments with grand promises that extend beyond merely reshaping markets — they envision a profound transformation in the very way we live.
Some of them may very well be correct. However, AI adoption rates in many industries remain low. Forecasters predict we won’t see AI’s impact on labor productivity until 2027. Whenever these doubts are raised, evangelizers point to the fact that AI will continue to get better and that its impacts will be felt by everyone soon enough. The technology journalist Charlie Warzel outlined some of these concerns when he interviewed Altman and Huffington after the launch of their product, ultimately concluding in the title of his Atlantic article: “AI Has Become an Article of Faith.”
Faith is really what this is about. In the intro, I glossed over the problems artificial intelligence has with dispensing accurate information, trusting that Sam Altman and his cadre of smart engineers will eventually fix them.
But what if they don’t?
There is still debate in the field over whether artificial intelligence will ever be truly accurate. And when it comes to something as high stakes as health advice, accuracy matters. Ultimately, we don’t know what will happen. At this point, it’s all speculative.
¹ More specifically, one is an umbrella review (a review of all the reviews), and the other is a systematic review.
Once I read the “it will remind you to trade soda pop for sparkling water” parenthetical, my skepticism meter spiked, and it never came down for the rest of the article.
Most of us don’t adhere to the optimal level of health management, because most of us want to do some things that aren’t optimal. I know I have a few myself. (But not swapping soda pop for sparkling water; I did that many years ago!) Some people do things that are very suboptimal for their own health. I think that’s regrettable, but ultimately it’s their body and they’re in charge of what they do to it.
Now, I don't want to completely downplay the opportunities that advances in informational medical technology can offer. For those who want to make a concerted effort to improve their health, this could be a useful tool. But I think there will be plenty of people who just say, "Nah, I want to do this instead because I enjoy it." I would think a top-tier model would account for this and perhaps try to find a harm-reduction avenue instead, but some people still might just want to opt out of the technology altogether rather than bog down their notifications further.
I assume that the conversation with investors goes like this:
“So the user puts in their age and sex. Is that going to give them meaningful health advice?”
“No, no — but then we nudge them to input more personal information — diet habits, sleep, exercise. We offer *personalized* advice!”
“Okay, so does that make them healthier?”
“No, it’s still ineffective. So, we nudge them to put in more personal information — links to medical records, credit card data, dating profiles.”
“And that all makes them healthier?”
“No, no — totally ineffective. And the less it does, the more we nudge them to enter personal information. Look: we’re designing this to make you money. You’re not doing this for your health.”