You know what would go a long way towards answering a lot of these questions, like long covid and more? If everyone's medical records were kept in longitudinal consolidated databases, that can be aggregated, searched with algorithms to spot trends and correlations, etc.
Why don't we have that? In significant part because of overblown privacy fears that get the cost-benefit analysis wrong, loading up the cost side with vague, theoretical, hypothetical fears and dismissing likely concrete benefits.
The government is too hobbled and bureaucratic to get it done, though the recent interoperability rules for medical records are a big step towards making it possible for private companies to do it. Since the government can't or won't do this itself, it should get out of the way and focus on removing barriers and helping private companies, such as insurance companies and vertically integrated IDNs, do it responsibly. There's an enormous public health payoff if this can get done.
Privacy activists are my personal enemies, especially if they're "liberals": destructive little crypto-libertarians who hate the idea of the collective good requiring even a little individual sacrifice.
Pretty sure it's real and I concur. If the government wanted to arrest political dissidents or do other tyrannical things, it could easily do so with powers that the FBI and NSA already have. Knowing that you had an appendectomy when you were 16 wouldn't help them any. People should always be skeptical about how their government uses its power, but irrational skepticism gets in the way of having a government that is actually effective.
Does the phrase, "pre-existing conditions" ring any bells?
Like you, I would like to see medical records kept in longitudinal consolidated databases and made available to researchers. Like you, I think the benefits would be immense.
But people who fear that insurance companies would use this data to segment the market and charge some people more than others (and refuse to insure some people altogether) are not responding to mere hypothetical fears.
So, any proposal like yours would need to be accompanied by radical changes in the insurance market. And it would also need to make sure that Facebook doesn't target my 13 year old because he has a familial predisposition to diabetes or depression or whatever.
I'm glad you are thinking about the best that could result from the intelligent use of medical records. Now think about the worst that could result. Cool! There's a VC fund ready and waiting to fund it.
Insurance companies already know who those people are, though. That information is already available to them because they see, as they must in order to pay for it, every healthcare service and item their members receive. That's why I also say "overblown".
Understand. Though, somewhat perversely, part of what saves us from the worst behavior of the insurance companies is the horrendously backwards state of current medical records.
As well as parts of the ACA. My point is simply that legislation like the ACA's requirement of coverage despite pre-existing conditions will be a precondition of the broader popular acceptance of massive data-pooling of the kind you advocate. The fears are not purely hypothetical (whether they are overblown or not will require careful readings from an inspirometer).
Yes, the records themselves are not always easily standardized (though, to be fair, the electronic ones often are, and it's getting better). But all the important billing data is, for private and Medicare, etc. It's readily and regularly used both by the payors and by researchers.
I guess my point is that the data is already pretty pooled. It's not one pool, but it's a much larger pool than you seem to suspect.
If we did nothing more than require that every insurance claims database be expanded to also pull in the underlying supporting EMR, and then made portable so that a patient's info follows the patient from one insurance plan to another, I suspect that would be a significant improvement in the amount of useful research that could be done, which is now limited by the relative thinness of claims data compared to full medical records.
100% agree. Our crappy record keeping is a blessing and a curse as is the crappiness of the entire medical system. It's not sustainable but it's hard to forge a path forward with our current politics. The ACA has made insurance companies significantly less evil and laid the groundwork for better research ("meaningful use" of EMR, for example) but it's hard to take a leap with a constant threat of repeal.
Agree. I'm pro-privacy compared to a lot of people, but the benefit here is of a magnitude to make the tradeoffs worth it IMO, especially given this reality (even if the state of that information is totally shambolic).
I mean, like Allan says, they already know. They know what medicines, tests, and diagnosis codes your physician uses to bill on your care (and your partner's care, and a few dozen million others' care). So if there's a 13 year old on your plan, like millions of other plans, they can infer a lot about them and their likely future health and costs based on what they have now.
This is not really accurate. There's a whole bunch of data somewhere, but it's mostly stored in non-standardized, often unique formats. Yes, one person at the insurance company could open another person's file and read it manually, but the data is not standardized and rationalized in the way that Allan imagines, where the insurance company could just query everyone's chart for Big Data.
If I'm commenting off topic here based on what you're referring to, I apologize. You mention "chart," so perhaps you are talking about a person's individual medical chart (in an EMR, or a paper file somewhere), whereas I was referring to claims data owned and managed by the insurance company (or Medicare). No, we do not yet really have easily portable individual medical charts, though certain EMRs are getting fairly good at linking charts across providers, hospitals, etc.
I mean, the companies *absolutely* do run the "big data" research and do modeling/forecasting with their membership. Of course this claims/payment data is highly-guarded proprietary info so they don't readily share it, though they do in fact share it for research with the right controls.
"If everyone's medical records were kept in longitudinal consolidated databases, that can be aggregated, searched with algorithms to spot trends and correlations, etc."
I am highly skeptical that there would be any benefit commensurate with such a large undertaking. Data is only valuable if it's meaningful. It's only meaningful if it's uniform. It's only uniform if there are standards. What you're talking about would require standards for *everything*. We can't even get uniform COVID statistics across state lines.
I think a much more promising avenue would be to start some agency of HHS whose job is to gather meaningful statistics for public consumption. So right now, for example, if you want COVID case numbers, you can get them for different jurisdictions from data.gov, but the numbers aren't meaningful because they aren't uniform. If people in one state or at one age or of one ethnicity are more or less likely than another to take a PCR test, they skew the numbers. Rapid tests skew the numbers. To get meaningful numbers you would want a random survey to get a baseline and then supplement it with wastewater surveillance and PCR test numbers to produce a synthetic derivative. That requires real skill and dedicated effort, not just the crash programs the New York Times and Johns Hopkins stood up back in 2020 that are now on autopilot.
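To make the "synthetic derivative" idea concrete, here's a minimal sketch of one way to blend a random-survey baseline with reported case counts to correct for ascertainment bias. All the numbers, function names, and the simple ratio method are invented for illustration; a real estimate would also need uncertainty intervals and wastewater signals.

```python
# Toy illustration: use a random-survey prevalence baseline to estimate
# how many true infections each reported case represents, then scale
# later reported counts. All figures are hypothetical.

def ascertainment_ratio(survey_prevalence, population, reported_cases):
    """Ratio of true infections (per the survey) to reported cases."""
    true_infections = survey_prevalence * population
    return true_infections / reported_cases

def adjusted_estimate(reported_cases, ratio):
    """Scale reported counts by the survey-derived ratio."""
    return reported_cases * ratio

# Baseline week: a random survey finds 2% prevalence in a state of 1M
# people, during a week when 5,000 cases were reported.
ratio = ascertainment_ratio(0.02, 1_000_000, 5_000)  # 4.0

# Later week: 8,000 reported cases -> ~32,000 estimated infections,
# assuming testing behavior hasn't shifted since the baseline week.
print(adjusted_estimate(8_000, ratio))  # 32000.0
```

The fragile assumption, of course, is that the ratio stays stable between the baseline and later weeks — which is exactly why an ongoing survey and wastewater data would be needed to keep re-anchoring it.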
One huge benefit - I would be able to look at my own medical records from several decades of life in different states with different doctors. I was only able to get a copy of the x-ray from my broken toe a few years ago because the doctor pretended not to notice when I took a phone picture of his computer screen - it was apparently illegal for him to e-mail me my x-ray because of medical privacy laws. No other medical professional I've worked with has any information about that broken toe.
Even if there were nothing standardized and no statistical pattern to be drawn, just the ability to have this information available to each patient would be huge.
FWIW, I've never been able to get copies of x-rays etc. emailed --- but I've had good success with asking for a copy to be burnt onto a CD or DVD.
(I then import those into my personal medical record archive, because it turns out that having long baselines is useful for figuring out whether something anomalous is new or was there a decade ago. Saved me a whole bunch of testing and worry a few years ago, because I happened to have a copy of results from something unrelated, much earlier.)
I would expect an arms race to develop among insurers and integrated health providers over who can woo consumers with the best, most convenient, most user-friendly consumer-facing features: portals, apps, etc., for accessing your own health information, both in easily digestible summary form and as the underlying original provider records. That is, if there were a competitive market for individual health plans (i.e., people choosing their own insurance plan rather than having their employer choose one for reasons of interest to the employer rather than the consumer).
With the interoperability rules (common technical standards, plus the new prohibition on information blocking that prevents providers from refusing to share their records on request), we're closer than we've ever been to seeing that. But there needs to be a market motive to push it along: a reason for companies to invest in it, and for patients to see a benefit in giving their consent to a private company to assemble a full, 360-degree, longitudinal set of their records.
Honestly, I suspect the number of people who understand the benefit of long-term record-keeping and for whom that would be a selling point is pretty marginal. :(
You might be right that not enough people perceive a direct individual benefit to themselves. But there is real value and power in assembling an aggregated record set, even if individuals don't immediately perceive a direct benefit to themselves. All the more reason why privacy defaults should not be set to give individuals too much veto power over their records being included in such a database or the ability to hamstring its usefulness.
It would be better to be able to do that, but I think the dirty little secret of EMRs is that no one ever reads them. The more information that is in them, the less the doctors are inclined to wade through it all. If it's relevant, the patient will tell you, "Oh this toe always hurts because I broke it in 2015."
I think the bigger issue is making sure that relevant information goes from doctor A to doctor B: say, doctor A knows you're taking medicine X and doctor B wants to give you Y, but neither is aware of the other and the medicines have adverse interactions. That should be caught at the pharmacy if X and Y are always adverse, but if it's only adverse because of your specific condition, it can and does fall through the cracks. Ideally, I think the US system would be restructured away from specialists and toward care coordinators whose jobs would be to manage the wrangling of doctors A and B.
There are definitely standards (ICD-10, RxNorm, LOINC, etc.) and their adoption is now very high, particularly among hospitals and large clinical practices, and the VA/DoD health information systems now align with those standards. This wasn't the case 10 years ago, but a lot has happened since then to make health data meaningful. Even where data is unstructured or non-standard, technologies such as AWS's Comprehend Medical and other NLP tools can do a lot to bring unstructured and textual notes into a standardized data model. Private and public Health Information Exchanges enable the sharing of these standardized local records to give providers more comprehensive longitudinal records of a patient. API standards such as FHIR provide standard ways to query patient data so third-party apps can access it (in theory). Master Patient Indexes (MPIs) are more ubiquitous and higher performing, and match patient identities across many domains. Syndromic surveillance is in place in most states, such that if certain tests or diagnoses are entered into an EHR, a message is triggered to the public health department, usually including demographics and results (though sometimes just positive results, not negatives, so the denominator is not always known, which is just plain stupid).
There's still a ways to go, it's not 100%, but the technology seeds are all there, and the comprehensiveness is there enough that, with some initiative, it could have been utilized to help with COVID (and was, here and there).
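To give a flavor of why standardized codes matter here, below is a minimal sketch of querying pooled records by ICD-10 code over FHIR-shaped resources. The resource structures are heavily simplified, the patient data is invented, and this filters an in-memory list rather than calling a real FHIR server — it's only meant to show that once everyone encodes "post-COVID condition" the same way (U09.9 in ICD-10), a one-line query replaces reading charts by hand.

```python
# Toy illustration of querying standardized records. The dicts loosely
# mimic FHIR Condition resources; all patient data is invented.

conditions = [
    {"resourceType": "Condition", "subject": "Patient/1",
     "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10",
                          "code": "U09.9"}]}},   # post-COVID condition
    {"resourceType": "Condition", "subject": "Patient/2",
     "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10",
                          "code": "E11.9"}]}},   # type 2 diabetes
]

def patients_with_code(resources, icd10_code):
    """Return subjects whose Condition carries the given ICD-10 code."""
    return [r["subject"] for r in resources
            if any(c["code"] == icd10_code
                   for c in r["code"]["coding"])]

print(patients_with_code(conditions, "U09.9"))  # ['Patient/1']
```

Against a real FHIR endpoint this would be a search like `GET /Condition?code=U09.9`, but the point is the same: the standard code, not the free-text note, is what makes the data aggregable.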
But to Matt's and your point, nothing really happened in a deliberate way at scale. Nobody said "let's utilize all this infrastructure to aggregate useful real-time data," and that was really, really strange to me. Nobody was empowered to do this. Congress didn't ask anybody to. By and large, states were seemingly unaware that they could.
Some cities tried (NYC actually had really good data) but higher level coordination and permission was needed.
For example, when most people were vaccinated, they had to provide their name, address, DOB, sex, and maybe SSN, phone, email, insurer, etc. This information, along with the type of vaccine you got, was sent to a state immunization database. Some states like New York have a statewide master patient index, so it can match your records at any hospital or lab in the state, and in theory, NYSIIS data was available to it. This, in theory, should have given NYS the ability to not just say "x number of people were vaccinated, and x number of people tested positive," but actually say "John Smith of Syracuse, who received a Pfizer shot on 10/10/2021 and his second shot on 12/12/2021, tested positive at Onondaga County testing ctr 4 on 3/3/2022, was admitted to Upstate Medical on 3/6, had such and such vitals/diagnoses, and was discharged on 3/9." And yet, this just didn't happen. Why?
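The linkage step described above is, mechanically, just a join. Here's a minimal sketch of it with invented records, using a naive (name, DOB) key as a stand-in for a real master patient index (real MPIs use probabilistic matching over many demographic fields, not exact string equality):

```python
# Sketch of linking state immunization-registry rows to test results.
# All records are invented; the (name, dob) key is a toy stand-in for
# a real master patient index.

from datetime import date

immunizations = [
    {"name": "john smith", "dob": date(1980, 1, 1),
     "vaccine": "Pfizer",
     "dose_dates": [date(2021, 10, 10), date(2021, 12, 12)]},
]

test_results = [
    {"name": "john smith", "dob": date(1980, 1, 1),
     "result": "positive", "test_date": date(2022, 3, 3)},
    {"name": "jane doe", "dob": date(1990, 5, 5),
     "result": "negative", "test_date": date(2022, 3, 4)},
]

def link(immunizations, test_results):
    """Deterministic match on (name, dob); real MPIs are fuzzier."""
    index = {(r["name"], r["dob"]): r for r in immunizations}
    linked = []
    for t in test_results:
        shot = index.get((t["name"], t["dob"]))
        if shot is not None:
            linked.append({**t, "vaccine": shot["vaccine"],
                           "dose_dates": shot["dose_dates"]})
    return linked

# One linked record: John Smith's positive test joined to his shots.
print(link(immunizations, test_results))
```

The hard parts in practice aren't the join itself but identity resolution across messy demographics, data-sharing agreements between agencies, and someone being empowered to run it — which is the commenter's point.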
I agree that current PHI laws are overindexed on privacy. (My interpretation is that in the 90s prominent people with HIV didn't want to be thought of as gay, and that created the current overly strict rules. In practice, if you know doctors they talk about their patients all the time with only rough anonymity.)
A story I have is that when my 2YO was COVID tested, they didn't release the results to me, her father, because there was some electronic consent that was missing. I basically just called random phone numbers until I got some doctor who pushed a button in the EMR and made it available. I had spent the night before talking to techs whose actual job would have been to release the record, but they all claimed powerlessness. I suspect the doctor just broke the rule because who cares. It was a completely unnecessary waste of time. Obviously, if I get my child tested, I want the result, and in the 21st century, the result needs to get to me via computer. It was pointless bureaucratic delay.
So, it would be good if there was some agency designated to use EMRs and make findings to help public health. I'm just skeptical that you can throw "algorithms" and "AI" at it and get anything useful out. It takes real foresight and insight, not just computers.
Yeah, minor consent for parents is a big issue, as is releasing unreviewed labs to a patient portal. There's room for improvement, but there are some legitimately thorny issues there.
The unreviewed labs thing is really bizarre. My wife is a doctor, and it seems like they just let patients view stuff they have no context for? Seems like a good way to give people panic attacks. There should be some kind of time window, like: Dr. Jones, review this lab and write a note in the next 6 hours, or it will be released without your note.
That's sort of the case now. I forget whether it's 24 or 48 hours, but after that time, whether or not a doctor has read or reviewed them, the lab results are released to the patient-accessible EMR.
In the beginning of the pandemic, NIH issued a few NOSIs (Notice of Special Interest) for proposals on broad questions of SARS-CoV-2, including molecular biology and etiology. These dried up within a couple of months, even before their expiration as the small amount of money allocated was already spent. Now the only NOSIs are around downstream effects of COVID, messaging to underserved populations, and the like - nothing on biology or new treatments: https://grants.nih.gov/grants/guide/COVID-Related.cfm#active
In 2020, I submitted an R21 (two-year grant for exploratory research - $275k) to NIH on one of the SARS-CoV-2 proteins, for which we had a couple of publications already. It wasn't funded with the feedback being "there's already too many people working on this". Totally fair on the merits, but you might have thought in the first year of the pandemic that NIH would have wanted more people turning their research to SARS-CoV-2.
Talking to others, mine was a common experience. A collaborator had already developed a promising small-molecule inhibitor that not only worked in vitro against the virus, it even worked in a small humanized-mouse trial. They submitted an R01 (typical 5-year NIH research grant - ~$2m) proposal, which was not funded, with the typical stock criticisms.
Now, I don't want this to come off as a "woe is me" post; this isn't to say that my or my collaborator's proposals should have been funded no questions asked! But it's not just us. Having been on the other side of the equation, reviewing proposals for NIH in the last year, I can say there was no special attention paid to COVID-related proposals (if anything, it was more like an eye-roll at yet another one).
Maybe this is kind of self-serving, but it seems a bit tragic to me that after spending trillions of dollars to manage and mitigate the effects of COVID, we can't shake loose even a few billion for basic research.
Addendum: I have defended NIH before as the best way to decide what projects should be funded, and I still contend it's significantly better than the private grants that a lot of people are fond of. My complaint is that the high-level decision makers (e.g., Congress) didn't really prioritize SARS-CoV-2 research.
I review grants for an institute that offers lots of small pilot grants with seed money to get programs of research up and running. We put out calls, and ran extra sessions to get money out faster, etc. People just submitted whatever they wanted to do anyway with the word "COVID" worked in there somewhere. We did this for the first year. I only ever saw 1 application that was legitimately on COVID. (Which wasn't funded...)
Mice bred to resemble humans in certain key ways. It's a standard in pre-human trial research, though I'm not enough of a biologist to know if there are multiple lines for different types of human equivalency or how similar they actually are.
Two broad types: one is not so much bred as genetically engineered to possess bits of human DNA. The other type literally involves grafting bits of humans onto (or into) the mice, such as human tumors, or replacing mouse immune cells with human immune cells.
I get the “too many people already working in the field” argument, but they really ought to prioritize groups that already have some experience with it and some promising results.
I don't have numbers in front of me but I feel pretty confident that a majority of NIH grants were directed towards SARS-CoV-2 research. I'm skeptical that they didn't prioritize it. I think it's more likely that they were simply flooded with requests and had to turn a lot of them down. Do you think the NIH should have been given more money or that they aren't doing a good job at picking which research to fund? If it's the latter we would need to look at which grants were approved.
A majority of NIH grants were absolutely not directed towards SARS-CoV-2 research, nor would I expect them to be! It's a nearly $40b agency in a typical year; you don't want everyone to drop everything they're doing just to work on this.
https://covid19.nih.gov/funding It looks like they've spent almost $5b on COVID-related research (over two years), but that covers everything from basic science to trials to vaccine delivery. I don't want to come off as complaining too much about this, but I imagine most people would have assumed, like you, that there was a huge pivot to COVID when that really wasn't the case at all.
"the government agencies whose job is to hand out the grants are so ossified that it’s hard to wish they had billions more to play with"
FWIW this seems to be true across the agencies, not restricted to the healthcare related shops. You've written a lot about the need for the agencies to be more pragmatic and nimble, less process- and precedent-bound. I would very much love to see a post on how they could fix this, on a practical level.
Someone needs to do a really good longread about Jim Bridenstine's time at NASA, because NASA is one of the most ossified and non-nimble agencies out there (as Nelson is returning it to being), and Bridenstine managed to get it to operate things like the commercial crew program and the new lunar lander process in completely different ways, which is why the US has a human-launch system again, and why it's believable that Artemis might actually get to the moon as it wasn't before Bridenstine.
I think this gives a little too much credit to Bridenstine. The commercial crew program dates to the Obama administration, while its predecessor, the commercial orbital transportation system program, dates to the Bush admin.
One of the real pleasant surprises from the Trump years was how capable and committed Bridenstine was as NASA administrator. He really focused on the agency's core mission and worked well with people, speaking as an outside scientist through that time. But Theme Arrow is also right that the previous administrations really started the cultural and program changes that he carried forward.
I'm curious to learn more about the operations of these agencies. I will actually be starting a position at NASA HQ in a month, moving from academia. This is in the science directorate, not the human exploration directorate, but it will be interesting to learn how one large legacy bureaucracy (NASA) compares to another medium-sized legacy bureaucracy (private research university).
On the original comment, about being reluctant to trust these agencies with grant money to distribute, count me among those who think they do a pretty good job. It is certainly not perfect, and there is a healthy debate about the balance between funding useful incremental work versus riskier transformative work, but that is really a high-level strategic decision by the agency leadership and the congressional committees. So I don't think they do a bad job of awarding taxpayer money in research grants.
This sounds interesting but I also feel like the answer here has to be something other than just good leadership? Good leadership is obviously very important! But it's also hard to find (or basically luck?) and not really replicable or durable. What would legislation or agency replacement or something larger scale and more permanent actually look like in practice??
Honestly.....it can hurt a lot. Partially just because potential grantees are trying to think of projects that will get funded, which is not necessarily the same thing as a "good" project depending on how the funding works. The agencies should be able to fund work that is actually effective!
Also many of these agencies have a variety of functions, including funding but also regulation and consensus development or leadership on standards, and the results of funded projects play into the other functions.
I don't think it is necessarily true that rigorous process is a great way to fight bias. I think as often as not, processes are used to launder bias, making a biased decision appear legitimate because, hey look we're just following The Process.
I don't think there's actually a substitute for getting good people in place making smart decisions on a case-by-case basis.
I think exactly the challenge here is figuring out how to get good results with minimal downside risks, yes. It's really hard! But it's also the case that the processes we have don't really protect us from an outcome like "pet projects get funded," in part because some of those processes are so hard to navigate that only pet projects are able to do it.
When you're administering a grant program where 5% of applicants get the grants, then every page of "process" paperwork that applicants have to do is multiplied by 20.
Pharma scientist here, who bridges R&D and CMC functions.
In addition to insisting upon a proven benefit for any drug product or treatment, the FDA is all about investigating and controlling risk. This is good.
Test: 1) Think about all the times the FDA got it right. 2) Now think about all the times the FDA missed something.
You probably scratched your head the first time I asked you to think, and you could come up with some examples for the second time, all of which make you skeptical about “big pharma” and the utility of the regulations in place to protect consumers.
What’s going to happen if the FDA misses something, and a drug product that is not efficacious, or a drug product that is unsafe, reaches the market? The FDA will require more proof of safety or efficacy in the future. The regulatory burden increases, the complexity of the studies needed to show safety and efficacy increases, the cost increases. More risks to patients might be caught before the product rolls out (that is good!) but now we have built a very slow and expensive apparatus for making sure we catch those risks in advance. Many incentives push toward MORE complexity, more testing, and slower rollout.
I’m not sure how to undo that, or if we should try. I like the idea of taking compounds that are proven safe and testing their efficacy for other indications — you could lop off a whole host of safety requirements with such an investigation. Agree with Matt Y that the government would be the ideal funder of such studies.
I think there’s a broader comment to be made here about risk tolerance. How much are you willing to do to drive down the risk of something bad happening, and how much does it cost?
We spent the pandemic years disagreeing with each other about that. But it’s something I’m noticing more and more — contemporary society accepts far less risk than it did just a couple of decades ago, and part of the price we pay here is in speed and agility.
And yet, by accepting "far less risk" on medical research and drug approvals, we have in fact taken on far more risk that we won't be able to address new issues like COVID. Moving slow on COVID almost certainly killed more people than doing faster research would have - a myopic focus on *risks to study participants* ignores the much larger *risks to the general population*.
I don't have an in-principle objection to 'n of 1' self-experimentation with active pharmaceuticals, although for patients with terminal disease I'd hope they would enroll in clinical trials so their experimental treatments can also serve the broader public interest and advance our knowledge base. I do think very little 'n of 1' experimentation has effect sizes that exceed what you'd see from placebo, but the placebo effect is great! Especially if you couple it with improvement in diet, exercise, and sleep hygiene.
The problem comes from the opportunistic snake oil peddlers who deliberately obfuscate data on efficacy in order to sell their preferred brand of Placebo Plus. The information landscape in biomedicine is already incredibly difficult for consumers to navigate. I'm all for more efficient regulatory approval processes, but nervous about anything that further decreases the signal-to-noise ratio when it comes to clinical data.
There's a missing third category: Think about all the times the FDA should have approved something but didn't. That has big costs too (including in deaths), for all the patients who could have been helped but weren't. "Risk" is not all on one side.
The FDA process delays time to market for treatments, but I'm hard pressed to think of examples where they have just flatly denied approval for something efficacious.
The issue here is one of base rates. Most pharmaceutical compounds don't alter meaningful clinical end points. The few that do work tend to have small-to-moderate effect sizes. Your regulatory process should be optimized for that underlying reality. I don't think the consumer would be well served by flooding the market with placebos in order to shave 1-2 years off time to market for the handful of drugs that actually work. We've seen with COVID-19 how easy it was for people (very smart people!) to fool themselves over and over with shoddy clinical data on experimental treatments. Now imagine that reality, but for every disease.
"The FDA process delays time to market for treatments, but I'm hard pressed to think of examples where they have just flatly denied approval for something efficacious."
I think the more likely result is that because the process is so slow and expensive, you have many ideas that are not attempted.
I think some libertarians used to give beta blockers as an example. They've been approved for a long time now, obviously, but tens of thousands of people died from preventable cardiovascular events while the FDA was studying their safety and efficacy.
The quality of research now is much better than in the past and we've actually developed an ethical framework. I think a lot of our regulatory burden is fighting yesterday's battles.
But I agree that the public's perception is still based around the failure of the old models. Maybe it will take seeing the failure of the current "old" models to evolve (until the next failure).
I'm retired from the biopharma industry and put out a daily newsletter on COVID during most of 2020, stopping when the first vaccines were authorized for emergency use late in the year. I was equally frustrated by the slow pace at NIH; they should have done more to harness the vast clinical trial networks in the US. Most of the early intervention trials were quickly designed and up and running in the UK (the first to show the benefits of steroid treatment). NIH had experience in both the HIV/AIDS and cancer areas, where they set up large multi-center networks, some of which are still up and running (I wrote up a white paper on how this could be adapted to COVID research). This has to go down as a failure even though there are NIH efforts that are ongoing right now.
Observational trials are also incredibly useful. I was part of the industry group that developed and funded an exploratory approach to using observational data from medical records to look at both drug safety and efficacy. We involved the FDA in the planning of this and, not to get into all the gory details, it successfully morphed into a large multi-national group. The Observational Health Data Sciences and Informatics (OHDSI, pronounced "Odyssey") program is a multi-stakeholder, interdisciplinary collaborative working to bring out the value of health data through large-scale analytics. They mobilized quite early in the pandemic and set up a number of good research protocols to look at a variety of interventions. As commenter Allan Thoen notes below, these kinds of research protocols can be useful in ongoing work.
I don't know whether one can put up web links in the comments section. There is a very good YouTube video up on the TOGETHER Trial by 'Biotech and Bioinformatics with Prof Greg'; Google will be your friend if you want to watch it. It's a good description of new trial designs. Such designs are being used by industry, and Matt should have been more clear on this.
As someone who has done clinical research, there really is a huge burden on individual scientists. We have to secure funding, develop protocols, get it all through IRB, do enrollment, write and submit to journals in hopes of being accepted for publication, etc. Just tons of veto points. Industry-sponsored research is easier because the company helps do much of this stuff even if the only direct financial support is in-kind (i.e., not making you pay for their trial product). I think we have over-learned the ethical failures of past research by creating a vetocracy that's generating its own potential harm. (Sound familiar?)
There really should be projects that are "nationally approved" so that individual clinicians don't have to deal with the local administrative barriers. Just need to have a patient sign a premade consent form, the doc follows the protocol, and data is directly accessed by the central agency (or delegated research team) via the EMR.
Most patients are treated without ever being OFFERED the chance to participate in research, much less a prospective study, so enrollment takes much longer or we just end up with a bunch of underpowered pilot studies that never develop further. I want to actually participate in more research but it's just too hard to get off the ground, especially since it may not even be publishable and our clinical load is perpetually at burnout levels.
I’ve spent 20 years working on how to get medical evidence into practice faster and I think you are understating the impact of the Republican party’s decision to adopt anti-science/anti-public health as a core principle. Before 2009, this was something that could be openly discussed. Now it’s confined to the Democratic Party and is seen as too wonky and risky to spend a lot of political capital on. If Democrats had enough power to push through multiple objectives, I think we could do something about the ossification fairly quickly. But not as long as every attempt at public health is blocked and turned into an attack by Republicans.
What are the most promising processes you've seen to get medical evidence into practice faster which have been stymied by Republicans?
More generally, what made 2009 a threshold year? I would have said this has been trending downward for far longer, but was there some kind of cliff that year?
After initially describing COVID as a hoax, the Republican Party extensively opposed miraculous vaccines for the worst pandemic in a century while hawking endless quack medicine cures.
As to 2009: death panels.
Palin "charged that proposed legislation would create a "death panel" of bureaucrats who would carry out triage, i.e. decide whether Americans—such as her elderly parents, or children with Down syndrome—were "worthy of medical care""
"Palin's claim was reported as false and criticized by the press, fact-checkers, academics, physicians, Democrats, and some Republicans."
Thanks for sharing! It makes sense to me that both of those impacted public opinion in very negative ways. I wouldn't have thought it had as much impact on the internal working of health agencies, but the pressure must have a negative impact there as well.
I didn't care for the recent prescription drug cap policy because I think we should just institute a QALY system similar to the NHS. The labeling of CER as death panels might have made that politically impossible for a while which is just tragic.
I mean, I find it pretty curious that you asked for examples. Fauci has been getting death threats, DeSantis has called for his imprisonment, and someone was actually arrested, supposedly on their way to attempt to kill him. There has also been extensive state-level legislation to inhibit public health efforts.
I'd've thought it's very obvious it's a bad environment for public health efforts.
Much of the Republican response to the pandemic has been terrible, but I was viewing agency ossification as a separate problem and didn't see the overlap. The things you highlight point to me how the broader public discussion can lead to agencies being very defensive, which would exacerbate the ossification. I'm still mildly doubtful that if the Republicans had the same response as Democrats the agencies would have performed much better - though I hold that opinion very loosely and it's likely influenced by how badly I think they did overall.
The death panel thing is a separate, also tragic, part of American politics where parties use something that brings short-term political gains but long-term costs to the public.
Binya nails it- in 2008, Newt Gingrich was a vocal proponent of comparative effectiveness research. In 2009, Republicans were pointing to CER as death panels. Under Trump, evidence and expertise was branded as automatically suspect and policy was openly based on lies.
Probably a third of the discussions that take place here boil down to “state capacity is important and it’s near-impossible to cultivate when one side refuses to.”
Sucks, but there’s nothing to be done about it at the federal level. Get back to me when MN, CO, or VA is willing to do some trailblazing.
I wish it were universally true that only one side refuses. But “plow money into inefficient-to-useless patronage mills” emphatically does not count as building state capacity, and at the state and local levels that appears to be the primary concern of the democratic party in the areas where they face no serious competition at the ballot.
One of the things that makes long Covid so difficult to study right now is that we don’t really have a good case definition, which makes it difficult to recruit patients. There was one good quality study that found that the only symptom that was more common in people who had had the virus vs those who had not was loss of smell. That study was probably not large enough to detect rarer syndromes like the post-viral syndromes we have with other viruses and almost certainly occur with this virus. Still, it suggests that the estimates that 10% of Covid cases lead to long Covid are a massive overestimate. I’d say that’s actually a good thing.
One of the problems with a lack of high-quality science is that low quality science ends up being used when estimating the incidence of stuff like long Covid. Some of the studies with high estimates of long Covid don’t have a control group, which is really important because the pandemic massively disrupted the lives of everyone and that will affect everyone’s health status. Others do not confirm that the people in the study have actual evidence of Covid infection (positive PCR test, antibodies, T cell response) before including them in the long Covid group. And the NIH should be demanding that the science it funds is high quality because otherwise we waste time on stuff that has little chance of working.
Unfortunately I suspect long Covid has been too politicized in the US to ever get useful data. Honestly though, the US is bad at this type of research anyway so I have a lot more faith in some European country to get valuable data at some point. And if no European country considers it worth researching, that’s a strong argument against its being a significant problem beyond the usual post-viral syndromes we already know exist.
I am 100% on board with the sentiment that the clinical trial could and should have been done faster and it is a tragedy that more hasn't been done.
I think there are a few extra facts that might add some nuance to the situation:
1) The NIH did in fact have around 6 (non-vaccine) platform trials as part of its ACTIV trial research program (and associated ACTT trials). But they are largely too small and too slow.
The best known output of this is in the ACTIV-2 trial (outpatients) which by itself tested around half a dozen monoclonal antibodies under a common protocol so that results could be compared and the trials could be supported by NIH.
The biggest disappointment is the ACTIV-1 trial (hospitalised patients). Basically all the hospitalised patient recommendations come from the UK RECOVERY trial, which has randomised ~44,000 patients and figured out dexamethasone helped by mid-2020. ACTIV-1 has (I think?) randomised around 3,000, which may be quite underpowered to detect small but important impacts on mortality.
Most hope is for the ACTIV-6 trial, which is the US govt trying out a structure like the ones you've promoted here. Started way too late, but they actually have 2 different doses of ivermectin (probably won't work but good to really put a nail in this coffin) and also fluticasone and fluvoxamine (much better chance of working and are cheap).
2) The PANORAMIC trial in the UK may be the fastest-enrolling therapeutic trial in recent history and has a jaw-dropping ~22,000 patients randomised to Merck's antiviral (molnupiravir), and has done this over the past 4-5 months. This is already around 10 times the size of Merck's original controversial trial! It could give a much cleaner answer than the original trial did. Interestingly, the trial protocol includes a pre-specified 'economic analysis' as well to basically figure out if giving these antivirals to less at-risk (50s and vaccinated) people is worth it to the NHS.
Our ability to reduce mortality is hugely hampered by the slow recruitment of these trials and I hope someone who can do something about it reads your piece and acts.
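The "underpowered" point in (1) above can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch: the effect sizes below are assumed for illustration (roughly RECOVERY-like mortality rates), not ACTIV-1's actual design parameters.

```python
# Back-of-the-envelope sample size for a two-arm mortality trial, using the
# standard normal-approximation formula for comparing two proportions.
# The 26% -> 23% effect size is an illustrative assumption, not a trial spec.
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per arm to detect a drop in event rate from p1 to p2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = z.inv_cdf(power)           # critical value for desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Detecting a 3-point absolute drop in mortality (26% -> 23%) at 80% power
# takes roughly 3,200 patients PER ARM, i.e. ~6,400 total, so a trial that
# randomised ~3,000 patients in total is underpowered for effects this size.
per_arm = n_per_arm(0.26, 0.23)
```

Smaller effects blow the requirement up quadratically, which is why RECOVERY's ~44,000 patients bought it so much resolving power.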
As an aside, I was offered participation in ACTIV 6 when I was diagnosed with covid in Jan. However, I had a mild case that was improving by the time I was contacted a day after the positive result, which was a few days into symptoms (probably omicron, 3x vaccinated) and my personal cost-benefit assessment was that no personal medication side effects were worth taking a new medicine at that point, so I passed.
The fact that the NIH has done poorly with their Long COVID study is depressing but unsurprising. Government institutions serve a particular function well, but speed is not their focus (unlike pharma/biotech, where every single day a drug is not on the market costs them money, and which have vast experience setting up and running clinical trials). This, unfortunately, is a problem without a simple solution. Who stands to benefit from Long COVID research? We all do, but not one particular company, so there's no profit motive to go fast.
Related to this is how the vaccines were actually developed -- there was a clear need for something to be developed quickly, so hundreds of companies started working on it. Most of them failed! But they were willing to devote the resources to doing so *because of the profit motive at the other end.* Albert Bourla (CEO of Pfizer) received a ton of praise for refusing the government's money to develop their vaccine... but he knew that the payoff would be huge on the other end so it was a worthwhile investment. Pre-pandemic, many companies had small teams (if any) working on things like novel coronavirus drugs/antivirals. A lot of them simply decided it wasn't worth it since there was no guarantee of a buyer on the back end. The system worked because governments were (rightly) willing to pay for the drugs if they were successful.
"What’s more, Pfizer already had a lead in hand. In 2003, researchers at the company developed an antiviral, known as PF-00835231, that could block the main protease of a coronavirus that emerged in 2002 and causes severe acute respiratory syndrome (SARS). But by the time they were ready to test it in patients, the SARS outbreak had been contained. PF-00835231 is structurally similar to a peptide that binds within SARS’s main protease. That binding site in SARS is identical to the one in SARS-CoV-2, so Pfizer researchers thought the molecule could work against the new virus. Tests showed they were right."
Stuff like this makes me grateful for billionaire philanthropy, and pushes me towards libertarianism. Especially depressing: no other government in the world funds these studies either. Why are all governments broken in the same way? (I am genuinely confused)
Maybe it's not good that the government does all the science funding. It would be an interesting experiment to have some private entities allocate some of the grant money and see what they do differently (e.g. from what I hear, https://fastgrants.org/ has done a good job with money from private philanthropy).
The US is by far the best resourced government in the world. No other country comes close to the combination of scale and wealth. People miss this because China is so big, but GDP/capita in China is 1/6 of America's at market rates and 1/3 at PPP. They've got to prioritise delivering things that most Americans take for granted. It's very unfortunate for humanity that the best resourced government is so dysfunctional.
Perhaps we need to face the reality that government is so broke in this country that as many resources as possible need to be routed to private enterprise.
And we need more eccentric billionaires and more effective altruism.
That's what we do best. Let Europe take the lead on innovation in governance.
Do you mean "broke" or "broken"? Broken, certainly, and that's the point Matt is making. But there's no scarcity of resources at the federal level; it's just a question of deploying them effectively (or at all).
Even in a country like the US, where the federal government only pays a fraction of total healthcare costs, medical research is a very clear example of something that pays for itself. Even if you have to borrow now to do it, you'll be borrowing a lot less down the road to pay for Medicare and Medicaid. There's no resource constraint at all here.
“Let Europe take the lead on innovation in governance”
Europe’s “innovation” in governance gave them, for example, laws that make insults criminal offenses and murderous, radical islamists in their midst. No thanks.
There was never any good reason to expect ivermectin to have any impact on Covid, and it seemed wrong from the start, so a trial investigating its efficacy was a waste of time and money. Same with hydroxychloroquine, lopinavir, etc.
There was no clear mechanism of action, no clear theory of why these drugs would be effective, and the people promoting these drugs to the public were scammers and quacks (who ended up running online businesses to prescribe and sell these cheap generics at huge markups).
Fluvoxamine is a notable exception. Or is it? The suggestion came not from delicensed physicians and podcasters on Facebook and Twitter but from researchers already investigating the drug’s other uses, who proposed plausible mechanisms of action: anti-inflammatory effects through serotonin transporter inhibition, and sigma-1 receptor agonism blocking COVID’s use of those receptors. And, indeed, other SSRIs seem to hold similar benefits because they function similarly, although research is ongoing.
Do we need a clinical trial (even a fast one) every time someone suggests anything? Are there not ways to evaluate the quality and possible efficacy of a suggested treatment before engaging in the scientific process? Or will we need human challenge trials to prove injecting people with bleach is ineffective?
These alternative treatments were never about finding a better treatment for Covid. They were about ideology and they were about a few opportunists making a quick buck off a desperate and fearful population.
If a lot of people are interested in a treatment, that seems like a good reason to investigate it, even if objectively it isn’t likely to work. You can stop people from getting scammed!
If you want more trust in public health, seems like being responsive to the actual questions the actual people you serve have, rather than the questions you wish they had, would be wise.
For every study you do, you don't do another one. Even if the financing comes from private sources, the resources for reviewing the trial (i.e., FDA personnel) are limited.
It’s absurd to say the US federal government - the richest organization that has ever existed - does not have the resources to study this pressing question of broad public interest. What we need is more studies - especially on things people actually care about - not more careful selection.
Proponents of ivermectin are encouraging parents to give it to their sick children via discord, whatsapp, and telegram chat groups. People are being caught trying to smuggle ivermectin into hospitals to give to sick relatives when the hospital won't. A crank lawyer in Wisconsin is pursuing a legal campaign against a hospital system because, she argues, every patient who dies could have been saved by ivermectin and is, therefore, a homicide.
Nothing that contradicts these people's worldview is going to matter. I don't care how well designed the trial is, how well managed the data analysis, or how credible the institutions and backers. The people interested in this treatment are not interested because they think it works. They are interested in it precisely because our medical and research institutions say it does not work.
[additional thought added as edit] One way to think about this is that you can't "umm, well, actually..." people into trusting public health. Throwing "the science" in their face isn't persuading the, I dunno, 30% of the country that is deeply skeptical of government, education, business, etc. Do you think Florida's surgeon general is going to change his stance and recommend vaccines for kids (not mandate, just recommend!) if there is a new well done clinical trial showing once again that the vaccines are safe for kids? No. He's going to continue recommending against vaccinations because the efficacy and safety and protection for kids against disease don't matter.
This reasoning does not make any sense. I'm sure it's true that the most extreme people aren't persuaded by the new evidence. But that doesn't speak to whether *anyone* is persuaded by the new evidence.
On any issue, most people are not extreme partisans. Probably most people who are interested in Ivermectin heard about the promising early studies, are suspicious that Big Pharma isn't willing to follow up because it wouldn't be profitable (true!), and would be grateful to hear that someone did follow up and it turned out not to work.
I could see that and am open to the idea that I'm overestimating the number of extremists.
Yet, I'm not sure where to draw the line on what to investigate. Lots of people believe in astrology, healing crystals, the power of prayer, zinc and vitamin D as cure-alls, and the infamous bleach injections. At some point, there has to be a threshold for "we study this because it may make a positive difference in treatment or public perceptions" vs "we study everything anyone wants us to study no matter how zany and unlikely". Maybe, given the public health implications, ivermectin research was worth it. I'm not so sure but I'm willing to entertain that I'm wrong here. But the principle "study things that people are interested in" has to have limits or it's just as big a waste of time as an arduous and inefficient FDA process.
Honestly I’m on the fence about the ivermectin question. I think there was virtually no chance of it being effective — I think the in vitro effect required concentrations that would have been dangerous in a human. On the other hand, if it is being used there is a good argument to test it so that we can say “these are the results”. If it’s mostly political then there are plenty of people who will not be convinced. Hopefully if you do it early enough, you would have results before it becomes a matter of political allegiance.
I guess what I’m saying is that when it comes to clinical trials we might need to remember the public health maxim “meet people where they are”. Ideally we wouldn’t have to trial low probability drugs but our world is far from ideal right now.
Regarding the footnote about the FDA requiring proof of efficacy... part of the reason for this is that FDA approvals are based on a relative assessment of benefits (efficacy) vs. harms (safety).
Safety is not a simple binary where all FDA-approved products are proven "safe." The FDA has signed off on the safety of, say, the OTC allergy medications (Claritin, Zyrtec, etc.) and also of cytotoxic chemotherapies. In an absolute sense, the former are obviously MUCH safer than the latter, and, indeed, in most medical settings outside of cancer, cytotoxic chemotherapies would be considered unsafe.
I don't think the idea that the FDA should switch to "safety-only" regulation is crazy, but it would be complicated. Maybe the FDA would need to rate safety on a sliding scale, with 1 being "safe enough to market as an OTC allergy med" and 10 being "unsafe in most contexts but acceptable as a cytotoxic chemotherapy."
In theory, individual physicians are supposed to be expert enough to make a customized risk-benefit calculation for each individual patient when they decide to prescribe a particular drug to that patient. Physicians are generally allowed to prescribe anything that's legally available for any reason they in their personal professional opinion believe will, net-net, improve the patient's health, whether it's an off-label use of an FDA approved drug or a legal drug that hasn't been reviewed by the FDA at all, and the main legal limits on what they can prescribe are tort malpractice standards (that's why there's a "learned intermediary" doctrine that protects drug manufacturers from liability for prescribing decisions of a physician that result in the patient being harmed by a drug, as long as the drug company disclosed known risks in the product label or the physician independently knew the risks).
But as expert as physicians might be, and as much as they sometimes chafe at evidence-based prescribing standards, practice has shown that they, like all humans, are subject to making irrational, non-evidence-based decisions (witness the number of docs pushing quack covid cures).
So there needs to be some evidence-based expert oversight body. Currently that role is filled by the FDA and by insurance plan formulary and drug utilization review committees. Maybe someday we'll get to the point where FDA can drop back and the private sector can fill that role entirely, but we're not there now, and I think it would require considerably more consolidation on the payor/provider side of the industry for that to work, because currently payors and providers don't have the capacity to fully replicate what FDA does.
Bioethicists are to important medical research what environmentalists/housing advocates are to new home construction? They are so caught up in their own bull$hit that they end up doing more harm than good?
And while there’s most certainly a need for them, they need less power and less control of policy.
You know what would go a long way towards answering a lot of these questions, like long covid and more? If everyone's medical records were kept in longitudinal consolidated databases, that can be aggregated, searched with algorithms to spot trends and correlations, etc.
Why don't we have that? In significant part because of overblown privacy fears that get the cost-benefit analysis wrong, loading up the cost side with vague, theoretical, hypothetical fears and dismissing likely concrete benefits.
The government is too hobbled and bureaucratic to get it done, though the recent interoperability rules for medical records are a big step towards making it possible for private companies to do it. Since the government can't/won't do this itself, it should get out of the way and focus on removing barriers and helping private companies do it responsibly, such as insurance companies and vertically integrated IDNs. There's an enormous public health payoff if this can get done.
privacy activists are my personal enemies. especially if they're "liberals". destructive little crypto-libertarians who hate the idea of collective good requiring even a little bit of individual sacrifice
I legit can’t tell if this is sarcastic or real, which concerns me.
Pretty sure it's real and I concur. If the government wanted to arrest political dissidents or do other tyrannical things, it could easily do so with powers that the FBI and NSA already have. Knowing that you had an appendectomy when you were 16 wouldn't help them any. People should always be skeptical about how their government uses its power, but irrational skepticism gets in the way of having a government that is actually effective.
"... vague, theoretical, hypothetical fears..."
Does the phrase, "pre-existing conditions" ring any bells?
Like you, I would like to see medical records kept in longitudinal consolidated databases and made available to researchers. Like you, I think the benefits would be immense.
But people who fear that insurance companies would use this data in order to segment the market and charge more for some people than others (and refuse to insure some people altogether) are not responding to mere hypothetical fears.
So, any proposal like yours would need to be accompanied by radical changes in the insurance market. And it would also need to make sure that Facebook doesn't target my 13 year old because he has a familial predisposition to diabetes or depression or whatever.
I'm glad you are thinking about the best that could result from the intelligent use of medical records. Now think about the worst that could result. Cool! There's a VC fund ready and waiting to fund it.
Insurance companies already know who those people are, though. That information is already available to them because they see, as they must in order to pay for it, every healthcare service and item their members receive. That's why I also say "overblown".
Understand. Though, somewhat perversely, part of what saves us from the worst behavior of the insurance companies is the horrendously backwards state of current medical records.
As well as parts of the ACA. My point is simply that legislation like the ACA's requirement of coverage despite pre-existing conditions will be a precondition of the broader popular acceptance of massive data-pooling of the kind you advocate. The fears are not purely hypothetical (whether they are overblown or not will require careful readings from an inspirometer).
Yes, the records themselves are not always easily standardized (though, to be fair, the electronic ones often are, and it's getting better). But all the important billing data is, for private and Medicare, etc. It's readily and regularly used both by the payors and by researchers.
I guess my point is the data is already pretty pooled, it's not one pool, but it's a lot larger pool than you seem to suspect.
If we did nothing more than require that every insurance claims database be expanded to also pull in the underlying supporting EMR itself, and then made portable so a patient's info follows the patient from one insurance plan to another, I suspect that would be a significant improvement in the amount of useful research that could be done, which is now limited by the relative thinness of claims data compared to full medical records.
100% agree. Our crappy record keeping is a blessing and a curse as is the crappiness of the entire medical system. It's not sustainable but it's hard to forge a path forward with our current politics. The ACA has made insurance companies significantly less evil and laid the groundwork for better research ("meaningful use" of EMR, for example) but it's hard to take a leap with a constant threat of repeal.
Agree. I'm pro-privacy compared to a lot of people, but the benefit here is of a magnitude to make the tradeoffs worth it IMO, especially given this reality (even if the state of that information is totally shambolic).
I mean, like Allan says, they already know. They know what medicines, tests, and diagnosis codes your physician uses to bill on your care (and your partner's care, and a few dozen million other's care). So if there's a 13 year old on your plan, like millions of other plans, they can infer a lot about them and their likely future health and costs based on what they have now.
This is not really accurate. There's a whole bunch of data somewhere, but it's mostly stored in a non-standardized, often unique format. Yes, one person at the insurance company could open one other person's file and read it manually, but the data is not standardized and rationalized in the way that Allan imagines, such that the insurance company can just query everyone's chart for Big Data.
If I'm commenting off topic here based on what you're referring to about I apologize. You mention "chart" so perhaps you are talking about a person's individual medical chart (in an EMR, or paper file somewhere), whereas I was referring to claims data owned and managed by the ins co (or medicare). No we do not yet really have easily portable individual medical charts--though certain EMRs are getting fairly good at linking charts across providers/hospitals etc.
I mean, the companies *absolutely* do run the "big data" research and do modeling/forecasting with their membership. Of course this claims/payment data is highly-guarded proprietary info so they don't readily share it, though they do in fact share it for research with the right controls.
Isn't that kind of market segmentation illegal under the ACA? https://www.healthcare.gov/how-plans-set-your-premiums/
"If everyone's medical records were kept in longitudinal consolidated databases, that can be aggregated, searched with algorithms to spot trends and correlations, etc."
I am highly skeptical that there would be any benefit commensurate to such a large undertaking. Data is only valuable if it's meaningful. It's only meaningful if it's uniform. It's only uniform if there are standards. What you're talking about would require standards for *everything*. We can't even get uniform COVID statistics across state lines.
I think a much more promising avenue would be to start some agency of HHS whose job is to gather meaningful statistics for public consumption. So right now, for example, if you want COVID case numbers, you can get them for different jurisdictions from data.gov, but the numbers aren't meaningful because they aren't uniform. If people in one state or at one age or of one ethnicity are more or less likely than another to take a PCR test, they skew the numbers. Rapid tests skew the numbers. To get meaningful numbers you would want a random survey to get a baseline and then supplement it with wastewater surveillance and PCR test numbers to produce a synthetic derivative. That requires real skill and dedicated effort, not just the crash programs the New York Times and Johns Hopkins set up back in 2020 that are now on autopilot.
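The survey-plus-reported-counts correction described above can be sketched in a few lines. All numbers here are made up for illustration; the point is only that scaling reported counts by a survey-implied ascertainment ratio makes jurisdictions with different testing uptake comparable.

```python
# Illustrative sketch (all figures hypothetical): correcting reported PCR
# case counts with a random-survey prevalence baseline so that states with
# different testing uptake can be compared on the same footing.

def ascertainment_ratio(survey_prevalence, reported_rate):
    """True-infections-per-reported-case implied by a random survey."""
    return survey_prevalence / reported_rate

def synthetic_cases(reported_cases, survey_prevalence, reported_rate):
    """Scale reported counts by the survey-implied ascertainment ratio."""
    return reported_cases * ascertainment_ratio(survey_prevalence, reported_rate)

# State A tests heavily: a random survey finds 2% currently infected while
# reported cases over the same window imply only 1% (2 true per reported).
# State B tests little: the survey finds 2% but reported counts imply 0.4%
# (5 true per reported).
a = synthetic_cases(reported_cases=100_000, survey_prevalence=0.02, reported_rate=0.01)
b = synthetic_cases(reported_cases=40_000, survey_prevalence=0.02, reported_rate=0.004)
# Raw counts make A look 2.5x worse off; the synthetic estimates are equal.
```

A real version would also fold in wastewater signals and model the survey's sampling error, but even this toy version shows why raw cross-state counts are misleading.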
One huge benefit - I would be able to look at my own medical records from several decades of life in different states with different doctors. I was only able to get a copy of the x-ray from my broken toe a few years ago because the doctor pretended not to notice when I took a phone picture of his computer screen - it was apparently illegal for him to e-mail me my x-ray because of medical privacy laws. No other medical professional I've worked with has any information about that broken toe.
Even if there were nothing standardized and no statistical pattern to be drawn, just the ability to have this information available to each patient would be huge.
FWIW, I've never been able to get copies of x-rays etc. emailed --- but I've had good success with asking for a copy to be burnt onto a CD or DVD.
(I then import those into my personal medical record archive, because it turns out that having long baselines is useful for figuring out whether something anomalous is new or was there a decade ago. Saved me a whole bunch of testing and worry a few years ago, because I happened to have a copy of results from something unrelated, much earlier.)
I would expect an arms race to develop among insurers/integrated health providers over who can woo consumers with the best, most convenient, most user-friendly portal, apps, etc., to access your own health information in easily digestible summary form, as well as the ability to see the underlying original provider records, if there were a competitive market for individual health plans (i.e., if people could choose their own insurance plan rather than having their employer choose one for them for reasons of interest to the employer rather than the consumer).
With the interoperability rules (common technical standards plus the new prohibition on information blocking, which bars providers from refusing to share their records on request), we're closer than we've ever been to seeing that. But there needs to be a market motive to push it along: a reason for companies to invest in it, and for patients to see a benefit in consenting to a private company assembling a full, 360-degree, longitudinal set of their records.
Honestly, I suspect the number of people who understand the benefit of long-term record-keeping and for whom that would be a selling point is pretty marginal. :(
You might be right that not enough people would see it as a selling point. But there is real value and power in assembling an aggregated record set, even if individuals don't immediately perceive a direct benefit to themselves. All the more reason why privacy defaults should not give individuals too much veto power over their records being included in such a database, or the ability to hamstring its usefulness.
It would be better to be able to do that, but I think the dirty little secret of EMRs is that no one ever reads them. The more information that is in them, the less the doctors are inclined to wade through it all. If it's relevant, the patient will tell you, "Oh this toe always hurts because I broke it in 2015."
I think the bigger issue is making sure that relevant information goes from doctor A to doctor B: doctor A knows you're taking medicine X and doctor B wants to give you Y, but neither is aware of the other and the medicines have adverse interactions. That should be caught at the pharmacy if X and Y are always adverse, but if the combination is only adverse because of your specific condition, it can and does fall through the cracks. Ideally, I think the US system would be restructured away from specialists and toward care coordinators whose job would be to manage the wrangling of doctors A and B.
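The always-adverse case really is just a table lookup once someone has the patient's complete medication list, which is the hard part. A minimal sketch, with invented drug names and an invented interaction table:

```python
# Hypothetical interaction table: an unordered pair of drugs maps to a note.
# The condition-dependent cases are exactly the ones that fall through the
# cracks when no one sees the patient's full picture.
INTERACTIONS = {
    frozenset({"drug_x", "drug_y"}): "always adverse",
    frozenset({"drug_x", "drug_z"}): "adverse only given a specific condition",
}

def check_new_prescription(current_meds, new_drug):
    """Flag known interactions between a new drug and current medications."""
    warnings = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({med, new_drug}))
        if note:
            warnings.append((med, new_drug, note))
    return warnings

result = check_new_prescription(["drug_x", "drug_q"], "drug_y")
print(result)  # [('drug_x', 'drug_y', 'always adverse')]
```

The check is trivial; what makes the real problem hard is that `current_meds` is usually scattered across several providers' records.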
There are definitely standards (ICD-10, RxNorm, LOINC, etc.) and their adoption is now very high, particularly among hospitals and large clinical practices, and the VA/DoD health information systems now align with those standards. This wasn't the case 10 years ago, but a lot has happened since then to make health data meaningful. Even where data is unstructured or non-standard, technologies such as AWS's Comprehend Medical and other NLP tools can do a lot to bring unstructured, textual notes into a standardized data model. Private and public Health Information Exchanges enable the sharing of these standardized local records to provide more comprehensive longitudinal records of a patient to providers. API standards such as FHIR provide standard ways to query patient data so third-party apps can access it (in theory). Master Patient Indexes (MPIs) are more ubiquitous and higher performing, and match patient identities across many domains. Syndromic surveillance is in place in most states, such that if certain tests or diagnoses are entered into an EHR, that triggers a message to the public health department, usually including demographics and results (though sometimes just positive results, not negatives, so the denominator is not always known, which is just plain stupid).
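For what it's worth, the FHIR piece of this looks roughly like an ordinary REST search. A hedged sketch (the server URL and patient ID are made up, and no network call is made here; we just build the query URL and parse a hand-written example of the Bundle that a FHIR search returns):

```python
# FHIR REST search convention: GET {base}/Observation?patient=...&code=...
# The base URL and patient ID below are hypothetical.
BASE = "https://fhir.example.org"

def observation_query(patient_id, loinc_code):
    """Build a FHIR search URL for a patient's observations by LOINC code."""
    return f"{BASE}/Observation?patient={patient_id}&code=http://loinc.org|{loinc_code}"

# 94500-6 is the LOINC code for a SARS-CoV-2 RNA PCR result (as I understand it).
url = observation_query("12345", "94500-6")

# A FHIR search returns a Bundle resource; matches live in its "entry" list.
# This Bundle is a hand-written minimal example, not a real server response.
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Observation",
                      "effectiveDateTime": "2022-03-03",
                      "valueCodeableConcept": {"text": "Detected"}}},
    ],
}

results = [(e["resource"]["effectiveDateTime"],
            e["resource"]["valueCodeableConcept"]["text"])
           for e in bundle.get("entry", [])]
print(results)  # [('2022-03-03', 'Detected')]
```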
There's still a ways to go, and it's not 100%, but the technology seeds are all there, and coverage is comprehensive enough that, with some initiative, it could have been utilized to help with Covid (and was, here and there).
But to Matt's and your point, nothing really happened in a deliberate way at scale. Nobody said "let's utilize all this infrastructure to aggregate useful real-time data," and that was really, really strange to me. Nobody was empowered to do this. Congress didn't ask anybody to. By and large, states were seemingly unaware that they could.
Some cities tried (NYC actually had really good data) but higher level coordination and permission was needed.
For example, when most people were vaccinated, they had to provide their name, address, DOB, sex, and maybe SSN, phone, email, insurer, etc. This information, along with the type of vaccine you got, was sent to a state immunization database. Some states like New York have a statewide master patient index, so it can match your records at any hospital or lab in the state, and in theory, NYSIIS data was available to it. This, in theory, should have given NYS the ability to not just say "x number of people were vaccinated, and x number of people tested positive," but actually say "John Smith of Syracuse, who received a Pfizer shot on 10/10/2021 and his second shot on 12/12/2021, tested positive at Onondaga County testing ctr 4 on 3/3/2022, was admitted to Upstate Medical on 3/6, had such-and-such vitals/diagnoses, and was discharged on 3/9." And yet, this just didn't happen. Why?
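Mechanically, once a master patient index has assigned one ID across systems, producing that kind of patient-level timeline is just a join. A sketch with entirely invented records:

```python
# Three registries, already linked by a (hypothetical) MPI-assigned ID.
immunizations = [
    {"mpi_id": 1, "product": "Pfizer", "dose": 1, "date": "2021-10-10"},
    {"mpi_id": 1, "product": "Pfizer", "dose": 2, "date": "2021-12-12"},
]
positive_tests = [{"mpi_id": 1, "site": "ctr 4", "date": "2022-03-03"}]
admissions = [{"mpi_id": 1, "hospital": "Upstate Medical",
               "admitted": "2022-03-06", "discharged": "2022-03-09"}]

def patient_timeline(mpi_id):
    """Merge one patient's events into a chronological timeline.
    ISO dates sort correctly as strings."""
    events = (
        [("vaccinated", r["date"]) for r in immunizations if r["mpi_id"] == mpi_id]
        + [("tested positive", r["date"]) for r in positive_tests if r["mpi_id"] == mpi_id]
        + [("admitted", r["admitted"]) for r in admissions if r["mpi_id"] == mpi_id]
    )
    return sorted(events, key=lambda e: e[1])

print(patient_timeline(1))
# [('vaccinated', '2021-10-10'), ('vaccinated', '2021-12-12'),
#  ('tested positive', '2022-03-03'), ('admitted', '2022-03-06')]
```

The analytic question ("how does vaccination status relate to hospitalization?") is then a group-by over these timelines; the blocker was never the computation.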
I agree that current PHI laws are overindexed on privacy. (My interpretation is that in the 90s prominent people with HIV didn't want to be thought of as gay, and that created the current overly strict rules. In practice, if you know doctors they talk about their patients all the time with only rough anonymity.)
A story I have is that when my 2YO was COVID tested, they didn't release the results to me, her father, because there was some electronic consent that was missing. I basically just called random phone numbers until I got some doctor who pushed a button in the EMR and made it available. I had spent the night before talking to techs whose actual job would have been to release the record, but they all claimed powerlessness. I suspect the doctor just broke the rule because who cares. It was a completely unnecessary waste of time. Obviously, if I get my child tested, I want the result, and in the 21st century, the result needs to get to me via computer. It was pointless bureaucratic delay.
So, it would be good if there was some agency designated to use EMRs and make findings to help public health. I'm just skeptical that you can throw "algorithms" and "AI" at it and get anything useful out. It takes real foresight and insight, not just computers.
Yeah, minor consent for parents is a big issue, as is releasing unreviewed labs to a patient portal. Room for improvement, but there are some legitimately thorny issues there.
The unreviewed labs thing is really bizarre. My wife is a doctor, and it seems like they just let patients view stuff they have no context for? Seems like a good way to give people panic attacks. There should be some kind of time window, like: Dr. Jones, review this lab and write a note in the next 6 hours, or it will be released without your note.
That's sort of the case now. I forget whether it's 24 or 48 hours, but after that time whether or not a doctor has read or reviewed it the lab results are released to the patient accessible EMR.
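The rule as described reduces to a tiny predicate. The 36-hour window below is a placeholder, since the actual number (24 vs 48 hours) is uncertain in the thread:

```python
# Toy model of auto-release: a result becomes patient-visible once the
# doctor signs off OR the window elapses, whichever comes first.
from datetime import datetime, timedelta

RELEASE_WINDOW = timedelta(hours=36)  # placeholder, not the actual regulation

def is_released(resulted_at, now, reviewed):
    """True if the lab result should appear in the patient portal."""
    return reviewed or (now - resulted_at) >= RELEASE_WINDOW

t0 = datetime(2022, 3, 1, 9, 0)
print(is_released(t0, t0 + timedelta(hours=12), reviewed=False))  # False
print(is_released(t0, t0 + timedelta(hours=48), reviewed=False))  # True
print(is_released(t0, t0 + timedelta(hours=1), reviewed=True))    # True
```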
I should add, obviously, nobody would see or report data at this level of identifiability, but the accuracy would be at the individual level.
https://xkcd.com/927
The FHIR standards already allow for nearly all of this. I work in this space.
https://www.moderndescartes.com/essays/deep_learning_emr/
In the beginning of the pandemic, NIH issued a few NOSIs (Notice of Special Interest) for proposals on broad questions of SARS-CoV-2, including molecular biology and etiology. These dried up within a couple of months, even before their expiration as the small amount of money allocated was already spent. Now the only NOSIs are around downstream effects of COVID, messaging to underserved populations, and the like - nothing on biology or new treatments: https://grants.nih.gov/grants/guide/COVID-Related.cfm#active
In 2020, I submitted an R21 (two-year grant for exploratory research - $275k) to NIH on one of the SARS-CoV-2 proteins, for which we had a couple of publications already. It wasn't funded with the feedback being "there's already too many people working on this". Totally fair on the merits, but you might have thought in the first year of the pandemic that NIH would have wanted more people turning their research to SARS-CoV-2.
Talking to others, mine was a common experience. A collaborator had already developed a promising small-molecule inhibitor that not only worked in vitro against the virus, it even worked in a small humanized mice trial. They submitted an R01 (typical 5-year NIH research grant - ~$2m) proposal, which was not funded with typical stock criticisms.
Now, I don't want this to come off as a "woe is me" post; this isn't to say that my or my collaborator's proposals should have been funded no questions asked! But it's not just us. Having been on the other side of the equation, reviewing proposals for NIH in the last year, I can say there was no special attention paid to COVID-related proposals (if anything, it was more like an eye-roll at yet another one).
Maybe this is kind of self-serving, but it seems a bit tragic to me that after spending trillions of dollars to manage and mitigate the effects of COVID, we can't shake loose even a few billion for basic research.
Addendum: I have defended NIH before as the best way to decide what projects should be funded, and I still contend it's significantly better than the private grants that a lot of people are fond of. My complaint is that the high-level decision makers (e.g., Congress) didn't really prioritize SARS-CoV-2 research.
I review grants for an institute that offers lots of small pilot grants with seed money to get programs of research up and running. We put out calls, and ran extra sessions to get money out faster, etc. People just submitted whatever they wanted to do anyway with the word "COVID" worked in there somewhere. We did this for the first year. I only ever saw 1 application that was legitimately on COVID. (Which wasn't funded...)
I'm almost afraid to ask what a "humanized mice trial" is.
Mice bred to resemble humans in certain key ways. It's a standard in pre-human trial research, though I'm not enough of a biologist to know if there are multiple lines for different types of human equivalency or how similar they actually are.
It's mice that are being groomed by Disney for don't ask what.
Mice can get through the tiniest of openings, which means that they can sneak into your home and groom your children.
Sorry for the spoiler, but someone had to blow the whistle.
Two broad types: one is not so much bred but is genetically engineered to possess bits of human DNA. The other type is literally grafting bits of humans onto (or into) the mice, such as human tumors or replacing mice immune cells with human immune cells.
Well that can’t possibly go wrong.
In this case, I believe it just means they engineer human ACE2 into it so SARS-CoV-2 can infect it.
I get the “too many people already working in the field” but they really ought to be prioritizing groups that already have some experience with it and have some promising results.
I don't have numbers in front of me but I feel pretty confident that a majority of NIH grants were directed towards SARS-CoV-2 research. I'm skeptical that they didn't prioritize it. I think it's more likely that they were simply flooded with requests and had to turn a lot of them down. Do you think the NIH should have been given more money or that they aren't doing a good job at picking which research to fund? If it's the latter we would need to look at which grants were approved.
A majority of NIH grants were absolutely not directed towards SARS-CoV-2 research, nor would I expect them to be! It's a nearly $40b agency in a typical year; you don't want everyone to drop everything they're doing just to work on this.
https://covid19.nih.gov/funding It looks like they've spent almost $5b on COVID-related research (over two years), but that covers everything from basic science to trials to vaccine delivery. I don't want to come off as complaining too much about this, but I imagine most people would have assumed, like you, that there was a huge pivot to COVID when that really wasn't the case at all.
"the government agencies whose job is to hand out the grants are so ossified that it’s hard to wish they had billions more to play with"
FWIW this seems to be true across the agencies, not restricted to the healthcare related shops. You've written a lot about the need for the agencies to be more pragmatic and nimble, less process- and precedent-bound. I would very much love to see a post on how they could fix this, on a practical level.
Someone needs to do a really good longread about Jim Bridenstine's time at NASA, because NASA is one of the most ossified and non-nimble agencies out there (as Nelson is returning it to being), and Bridenstine managed to get it to operate things like the commercial crew program and the new lunar lander process in completely different ways, which is why the US has a human-launch system again, and why it's believable that Artemis might actually get to the moon as it wasn't before Bridenstine.
I think this gives a little too much credit to Bridenstine. The commercial crew program dates to the Obama administration, while its predecessor, the Commercial Orbital Transportation Services program, dates to the Bush administration.
One of the real pleasant surprises from the Trump years was how capable and committed Bridenstine was as NASA administrator. He really focused on the agency's core mission and worked well with people, speaking as an outside scientist through that time. But Theme Arrow is also right that the previous administrations really started the cultural and program changes that he carried forward.
I'm curious to learn more about the operations of these agencies. I will actually be starting a position at NASA HQ in a month, moving from academia. This is in the science directorate, not the human exploration directorate, but it will be interesting to learn how one large legacy bureaucracy (NASA) compares to another medium-sized legacy bureaucracy (private research university).
On the original comment, about being reluctant to trust these agencies with grant money to distribute, count me among those who think they do a pretty good job. It is certainly not perfect, and there is a healthy debate about the balance between funding useful incremental work versus riskier transformative work, but that is really a high-level strategic decision by the agency leadership and the congressional committees. So I don't think they do a bad job of awarding taxpayer money in research grants.
This sounds interesting but I also feel like the answer here has to be something other than just good leadership? Good leadership is obviously very important! But it's also hard to find (or basically luck?) and not really replicable or durable. What would legislation or agency replacement or something larger scale and more permanent actually look like in practice??
Honestly.....it can hurt a lot. Partially just because potential grantees are trying to think of projects that will get funded, which is not necessarily the same thing as a "good" project depending on how the funding works. The agencies should be able to fund work that is actually effective!
Also many of these agencies have a variety of functions, including funding but also regulation and consensus development or leadership on standards, and the results of funded projects play into the other functions.
I don't think it is necessarily true that rigorous process is a great way to fight bias. I think as often as not, processes are used to launder bias, making a biased decision appear legitimate because, hey look we're just following The Process.
I don't think there's actually a substitute for getting good people in place making smart decisions on a case-by-case basis.
I think exactly the challenge here is figuring out how to have good results with minimal downside risks, yes. It's really hard! But it's also the case that the processes we have don't really protect us from an outcome like "pet projects get funded," in part because some of those processes are so hard to navigate that only pet projects are able to do it.
When you're administering a grant program where 5% of applicants get the grants, then every page of "process" paperwork that applicants have to do is multiplied by 20.
Pharma scientist here, who bridges R&D and CMC functions.
In addition to insisting upon a proven benefit for any drug product or treatment, the FDA is all about investigating and controlling risk. This is good.
Test: 1) Think about all the times the FDA got it right. 2) Now think about all the times the FDA missed something.
You probably scratched your head the first time I asked you to think, and you could come up with some examples for the second time, all of which make you skeptical about “big pharma” and the utility of the regulations in place to protect consumers.
What’s going to happen if the FDA misses something, and a drug product that is not efficacious, or a drug product that is unsafe, reaches the market? The FDA will require more proof of safety or efficacy in the future. The regulatory burden increases, the complexity of the studies needed to show safety and efficacy increases, the cost increases. More risks to patients might be caught before the product rolls out (that is good!) but now we have built a very slow and expensive apparatus for making sure we catch those risks in advance. Many incentives push toward MORE complexity, more testing, and slower rollout.
I’m not sure how to undo that, or if we should try. I like the idea of taking compounds that are proven safe and testing their efficacy for other indications — you could lop off a whole host of safety requirements with such an investigation. Agree with Matt Y that the government would be the ideal funder of such studies.
I think there’s a broader comment to be made here about risk tolerance. How much are you willing to do to drive down the risk of something bad happening, and how much does it cost?
We spent the pandemic years disagreeing with each other about that. But it’s something I’m noticing more and more — contemporary society accepts far less risk than it did just a couple of decades ago, and part of the price we pay here is in speed and agility.
And yet, by accepting "far less risk" on medical research and drug approvals, we have in fact taken on far more risk that we won't be able to address new issues like COVID. Moving slow on COVID almost certainly killed more people than doing faster research would have - a myopic focus on *risks to study participants* ignores the much larger *risks to the general population*.
I don't have an in-principle objection to 'n of 1' self-experimentation with active pharmaceuticals, although for patients with terminal disease I'd hope they would enroll in clinical trials so their experimental treatments can also serve the broader public interest and advance our knowledge base. I do think very little 'n of 1' experimentation has effect sizes that exceeds what you'd see from placebo, but the placebo effect is great! Especially if you couple it with improvement in diet, exercise, and sleep hygiene.
The problem comes from the opportunistic snake oil peddlers who deliberately obfuscate data on efficacy in order to sell their preferred brand of Placebo Plus. The information landscape in biomedicine is already incredibly difficult for consumers to navigate. I'm all for more efficient regulatory approval processes, but nervous about anything that further decreases the signal-to-noise ratio when it comes to clinical data.
There's a missing third category: Think about all the times the FDA should have approved something but didn't. That has big costs too (including in deaths), for all the patients who could have been helped but weren't. "Risk" is not all on one side.
The FDA process delays time to market for treatments, but I'm hard pressed to think of examples where they have just flatly denied approval for something efficacious.
The issue here is one of base rates. Most pharmaceutical compounds don't alter meaningful clinical end points. The few that do work tend to have small-to-moderate effect sizes. Your regulatory process should be optimized for that underlying reality. I don't think the consumer would be well served by flooding the market with placebos in order to shave 1-2 years off time to market for the handful of drugs that actually work. We've seen with COVID-19 how easy it was for very smart people to fool themselves over and over with shoddy clinical data on experimental treatments. Now imagine that reality, but for every disease.
"The FDA process delays time to market for treatments, but I'm hard pressed to think of examples where they have just flatly denied approval for something efficacious."
I think the more likely result is that because the process is so slow and expensive, you have many ideas that are not attempted.
I think some libertarians used to give beta blockers as an example. They've been approved for a long time now, obviously, but tens of thousands of people died from preventable cardiovascular events while the FDA was studying their safety and efficacy.
The quality of research now is much better than in the past and we've actually developed an ethical framework. I think a lot of our regulatory burden is fighting yesterday's battles.
But I agree that the public's perception is still based around the failure of the old models. Maybe it will take seeing the failure of the current "old" models to evolve (until the next failure).
I'm retired from the biopharma industry and put out a daily newsletter on COVID during most of 2020, stopping when the first vaccines were authorized for experimental use late in the year. I was equally frustrated by the slow pace at NIH and they should have done more to harness the vast clinical trial networks in the US. Most of the early intervention trials were quickly designed and up and running in the UK (first to show the benefits of steroid treatment). NIH had experience in both the HIV/AIDs and cancer areas where they set up large multi-center networks some of which are still up and running (I wrote up a white paper on how this could be adapted to COVID research). This has to go down as a failure even though there are NIH efforts that are ongoing right now.
Observational trials are also incredibly useful. I was part of the industry group that developed and funded an exploratory approach to the use of observational data from medical records to look at both drug safety and efficacy. We involved FDA in the planning of this and, not wanting to get into all the gory details, it successfully morphed into a large multi-national group. The Observational Health Data Sciences and Informatics (or OHDSI, pronounced "Odyssey") program is a multi-stakeholder, interdisciplinary collaborative to bring out the value of health data through large-scale analytics. They mobilized quite early in the pandemic and set up a number of good research protocols to look at a variety of interventions. As commentator Allan Thoen notes below, these kinds of research protocols can be useful in ongoing work.
I don't know whether one can put web links in the comments section, but there is a very good YouTube video on the TOGETHER Trial by 'Biotech and Bioinformatics with Prof Greg'. Google will be your friend if you want to watch it. It's a good description of new trial designs. Such designs are being used by industry, and Matt should have been clearer on this.
As someone who has done clinical research, there really is a huge burden on individual scientists. We have to secure funding, develop protocols, get it all through IRB, do enrollment, write and submit to journals in hopes of being accepted for publication, etc. Just tons of veto points. Industry-sponsored research is easier because the company helps do much of this stuff even if the only direct financial support is in-kind (i.e., not making you pay for their trial product). I think we have overlearned the ethical failures of past research by creating a vetocracy that's creating its own potential harm. (Sound familiar?)
There really should be projects that are "nationally approved" so that individual clinicians don't have to deal with the local administrative barriers. Just need to have a patient sign a premade consent form, the doc follows the protocol, and data is directly accessed by the central agency (or delegated research team) via the EMR.
Most patients are treated without being OFFERED to participate in research, much less a prospective study, so enrollment takes much longer or we just end up with a bunch of underpowered pilot studies that never develop further. I want to actually participate in more research but it's just too hard to get off the ground, especially since it may not even be publishable and our clinical load is perpetually at burnout levels.
I’ve spent 20 years working on how to get medical evidence into practice faster and I think you are understating the impact of the Republican party’s decision to adopt anti-science/anti-public health as a core principle. Before 2009, this was something that could be openly discussed. Now it’s confined to the Democratic Party and is seen as too wonky and risky to spend a lot of political capital on. If Democrats had enough power to push through multiple objectives, I think we could do something about the ossification fairly quickly. But not as long as every attempt at public health is blocked and turned into an attack by Republicans.
Would you elaborate more on this...
What are the most promising processes you've seen to get medical evidence into practice faster which have been stymied by Republicans?
More generally, what made 2009 a threshold year? I would have said this has been trending downward for far longer, but was there some kind of cliff that year?
After initially describing COVID as a hoax, the Republican Party extensively opposed miraculous vaccines for the worst pandemic in a century while hawking endless quack medicine cures.
As to 2009: death panels.
Palin "charged that proposed legislation would create a "death panel" of bureaucrats who would carry out triage, i.e. decide whether Americans—such as her elderly parents, or children with Down syndrome—were "worthy of medical care""
"Palin's claim was reported as false and criticized by the press, fact-checkers, academics, physicians, Democrats, and some Republicans."
https://en.wikipedia.org/wiki/Death_panel
Thanks for sharing! It makes sense to me that both of those impacted public opinion in very negative ways. I wouldn't have thought it had as much impact on the internal working of health agencies, but the pressure must have a negative impact there as well.
I didn't care for the recent prescription drug cap policy because I think we should just institute a QALY system similar to the NHS. The labeling of CER as death panels might have made that politically impossible for a while which is just tragic.
I mean, I find it pretty curious that you asked for examples. Fauci has been getting death threats, DeSantis has called for his imprisonment, and someone was actually arrested supposedly on their way to attempt to kill him. There has also been extensive state-level legislation to inhibit public health efforts.
I'd've thought it's very obvious it's a bad environment for public health efforts.
Much of the Republican response to the pandemic has been terrible, but I was viewing agency ossification as a separate problem and didn't see the overlap. The things you highlight point to me how the broader public discussion can lead to agencies being very defensive which would exacerbate the ossification. I'm still mildly doubtful that if the Republicans had the same response as Democrats the agencies would have performed much better - though I hold that opinion very loosely and its likely influenced by how badly I think they did overall.
The death panel thing is a separate also tragic part of American politics where parties will use something that brings short term political gains but long term costs to the public.
Binya nails it- in 2008, Newt Gingrich was a vocal proponent of comparative effectiveness research. In 2009, Republicans were pointing to CER as death panels. Under Trump, evidence and expertise was branded as automatically suspect and policy was openly based on lies.
Probably a third of the discussions that take place here boil down to “state capacity is important and it’s near-impossible to cultivate when one side refuses to.”
Sucks, but there’s nothing to be done about it at the federal level. Get back to me when MN, CO, or VA is willing to do some trailblazing.
I wish it were universally true that only one side refuses. But “plow money into inefficient-to-useless patronage mills” emphatically does not count as building state capacity, and at the state and local levels that appears to be the primary concern of the democratic party in the areas where they face no serious competition at the ballot.
Agreed. Also meant to be implied by my second sentence. Note that NY, MD, NJ, CA, and MA don't appear on that list.
One of the things that makes long Covid so difficult to study right now is that we don’t really have a good case definition, which makes it difficult to recruit patients. There was one good-quality study that found that the only symptom more common in people who had had the virus than in those who had not was loss of smell. That study was probably not large enough to detect rarer syndromes like the post-viral syndromes we see with other viruses, which almost certainly occur with this virus too. Still, it suggests that the estimates that 10% of Covid cases lead to long Covid are a massive overestimate. I’d say that’s actually a good thing.
One of the problems with a lack of high-quality science is that low quality science ends up being used when estimating the incidence of stuff like long Covid. Some of the studies with high estimates of long Covid don’t have a control group, which is really important because the pandemic had massively disrupted the lives of everyone and that will affect everyone’s health status. Others do not confirm that the people in the study have actual evidence of Covid infection (positive pcr test, antibodies, T cell response) in order for them to be included in the long Covid group. And the NIH should be demanding that the science it funds is high quality because otherwise we waste time on stuff that has little chance of working.
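The control-group point can be made concrete with a toy calculation (all numbers invented): without controls you attribute the entire symptom rate to the virus, when much of it is baseline.

```python
# Compare symptom rates in confirmed-COVID patients vs uninfected controls.
# Figures below are hypothetical, chosen only to show the size of the bias.

def excess_rate(cases_with_symptom, n_cases, controls_with_symptom, n_controls):
    """Symptom rate attributable to infection: case rate minus control rate."""
    return cases_with_symptom / n_cases - controls_with_symptom / n_controls

# Fatigue reported by 25% of cases -- but also by 20% of never-infected
# controls (pandemic disruption affects everyone's health status).
naive = 250 / 1000                            # 25% "long COVID" with no control group
adjusted = excess_rate(250, 1000, 200, 1000)  # 5% excess over baseline
print(naive, round(adjusted, 2))  # 0.25 0.05
```

A fivefold overestimate from skipping the control arm, before even getting to the problem of enrolling people with no lab-confirmed infection.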
Unfortunately I suspect long Covid has been too politicized in the US to ever get useful data. Honestly though, the US is bad at this type of research anyway so I have a lot more faith in some European country to get valuable data at some point. And if no European country considers it worth researching, that’s a strong argument against its being a significant problem beyond what we already know about usual post-viral syndromes that we already know exist
I am 100% on board with the sentiment that the clinical trial could and should have been done faster and it is a tragedy that more hasn't been done.
I think there are a few extra facts that might add some nuance to the situation:
1) The NIH did in fact have around 6 (non-vaccine) platform trials as part of its ACTIV trial research program (and associated ACTT trials). But they are largely too small and too slow.
The best-known output of this is the ACTIV-2 trial (outpatients), which by itself tested around half a dozen monoclonal antibodies under a common protocol so that results could be compared and the trials could be supported by NIH.
The biggest disappointment is the ACTIV-1 trial (hospitalised patients). Basically all the hospitalised-patient recommendations come from the UK RECOVERY trial, which has randomised ~44,000 patients and figured out dexamethasone helped by mid-2020. ACTIV-1 has (I think?) randomised around 3,000, which may be quite underpowered to detect small but important impacts on mortality.
Most hope is for the ACTIV-6 trial, which is the US govt trying out a structure like the ones you've promoted here. Started way too late, but they actually have 2 different doses of ivermectin (probably won't work but good to really put a nail in this coffin) and also fluticasone and fluvoxamine (much better chance of working and are cheap).
https://www.nih.gov/research-training/medical-research-initiatives/activ/covid-19-therapeutics-prioritized-testing-clinical-trials
2) The PANORAMIC trial in the UK may be the fastest-enrolling therapeutic trial in recent history: it has a jaw-dropping ~22,000 patients randomised to Merck's antiviral (molnupiravir) and has done this over the past 4-5 months. This is already around 10 times the size of Merck's original controversial trial! It could give a much cleaner answer than the original trial did. Interestingly, the trial protocol includes a pre-specified 'economic analysis' as well, to figure out whether giving these antivirals to less at-risk people (those in their 50s and the vaccinated) is worth it to the NHS.
https://www.panoramictrial.org/
Our ability to reduce mortality is hugely hampered by the slow recruitment of these trials and I hope someone who can do something about it reads your piece and acts.
As an aside, I was offered participation in ACTIV-6 when I was diagnosed with covid in Jan. However, I had a mild case that was improving by the time I was contacted, a day after the positive result and a few days into symptoms (probably omicron, 3x vaccinated). My personal cost-benefit assessment was that the potential side effects of a new medicine weren't worth it at that point, so I passed.
A few notes here:
The fact that the NIH has done poorly with their Long COVID study is depressing but unsurprising. Government institutions serve a particular function well, but speed is not their focus (unlike pharma/biotech, where every single day a drug is not on the market costs money, and which have vast experience setting up and running clinical trials). This, unfortunately, is a problem without a simple solution. Who stands to benefit from Long COVID research? We all do, but no one particular company, so there's no profit motive to go fast.
Related to this is how the vaccines were actually developed -- there was a clear need for something to be developed quickly, so hundreds of companies started working on it. Most of them failed! But they were willing to devote the resources to doing so *because of the profit motive at the other end.* Albert Bourla (CEO of Pfizer) received a ton of praise for refusing the government's money to develop their vaccine... but he knew that the payoff would be huge on the other end so it was a worthwhile investment. Pre-pandemic, many companies had small teams (if any) working on things like novel coronavirus drugs/antivirals. A lot of them simply decided it wasn't worth it since there was no guarantee of a buyer on the back end. The system worked because governments were (rightly) willing to pay for the drugs if they were successful.
I strongly recommend this writeup of Pfizer's discovery and development of Paxlovid, which gets to this very point: https://cen.acs.org/pharmaceuticals/drug-discovery/How-Pfizer-scientists-transformed-an-old-drug-lead-into-a-COVID-19-antiviral/100/i3
"What’s more, Pfizer already had a lead in hand. In 2003, researchers at the company developed an antiviral, known as PF-00835231, that could block the main protease of a coronavirus that emerged in 2002 and causes severe acute respiratory syndrome (SARS). But by the time they were ready to test it in patients, the SARS outbreak had been contained. PF-00835231 is structurally similar to a peptide that binds within SARS’s main protease. That binding site in SARS is identical to the one in SARS-CoV-2, so Pfizer researchers thought the molecule could work against the new virus. Tests showed they were right."
Stuff like this makes me grateful for billionaire philanthropy, and pushes me towards libertarianism. Especially depressing: no other government in the world funds these studies either. Why are all governments broken in the same way? (I am genuinely confused)
Maybe it's not good that the government does all the science funding. It would be an interesting experiment to have some private entities allocate some of the grant money and see what they do differently (e.g. from what I hear, https://fastgrants.org/ has done a good job with money from private philanthropy).
The US is by far the best resourced government in the world. No other country comes close to the combination of scale and wealth. People miss this because China is so big, but GDP/capita in China is 1/6 of America's at market rates and 1/3 at PPP. They've got to prioritise delivering things that most Americans take for granted. It's very unfortunate for humanity that the best resourced government is so dysfunctional.
That's why we need to double down on what we do well. Find ways to make private healthcare more competitive & abundant, etc.
Perhaps we need to face the reality that government is so broke in this country that as many resources as possible need to be routed to private enterprise.
And we need more eccentric billionaires and more effective altruism.
That's what we do best. Let Europe take the lead on innovation in governance.
It's not like Europe has done a ton of great studies on COVID either though...
Do you mean "broke" or "broken"? Broken, certainly, and that's the point Matt is making. But there's no scarcity of resources at the federal level; it's just a question of deploying them effectively (or at all).
Even in a country like the US, where the federal government only pays a fraction of total healthcare costs, medical research is a very clear example of something that pays for itself. Even if you have to borrow now to do it, you'll be borrowing a lot less down the road to pay for Medicare and Medicaid. There's no resource constraint at all here.
“Let Europe take the lead on innovation in governance”
Europe’s “innovation” in governance gave them, for example, laws that make insults criminal offenses and murderous radical Islamists in their midst. No thanks.
Or… hear me out:
There was never any good reason to expect ivermectin to have any impact on Covid and it seemed wrong from the start so a trial investigating its efficacy was a waste of time and money. Same with hydroxychloroquine, lopinavir, etc.
There was no clear mechanism of action, no clear theory of why these drugs would be effective, and the people promoting these drugs to the public were scammers and quacks (who ended up running online businesses to prescribe and sell these cheap generics at huge markups).
Fluvoxamine is a notable exception. Or is it? The suggestion came not from delicensed physicians and podcasters on Facebook and Twitter, but from researchers who were already investigating the drug’s other uses and who had plausible mechanisms of action in mind, namely the anti-inflammatory effects of serotonin transporter inhibition and the sigma-1 receptor agonist effects blocking COVID’s use of those receptors. And, indeed, other SSRIs seem to hold similar benefits because they function similarly, although research is ongoing.
Do we need a clinical trial (even a fast one) every time someone suggests anything? Are there not ways to evaluate the quality and possible efficacy of a suggested treatment before engaging in the scientific process? Or will we need human challenge trials to prove injecting people with bleach is ineffective?
These alternative treatments were never about finding a better treatment for Covid. They were about ideology and they were about a few opportunists making a quick buck off a desperate and fearful population.
If a lot of people are interested in a treatment, that seems like a good reason to investigate it, even if objectively it isn’t likely to work. You can stop people from getting scammed!
If you want more trust in public health, seems like being responsive to the actual questions the actual people you serve have, rather than the questions you wish they had, would be wise.
For every study you do, you don't do another one. Even if the financing comes from private sources, the resources for reviewing the trial (i.e., FDA personnel) are limited.
It’s absurd to say the US federal government - the richest organization that has ever existed - does not have the resources to study this pressing question of broad public interest. What we need is more studies - especially on things people actually care about - not more careful selection.
I wish that was how this kind of thing worked. But, here, look at what the crazy people say when confronted with high quality evidence that ivermectin doesn't work: https://twitter.com/CaulfieldTim/status/1510306569409290241?s=20&t=JNC-6kUTvrta1dQtWYOnaw
Proponents of ivermectin are encouraging parents to give it to their sick children via discord, whatsapp, and telegram chat groups. People are being caught trying to smuggle ivermectin into hospitals to give to sick relatives when the hospital won't. A crank lawyer in Wisconsin is pursuing a legal campaign against a hospital system because, she argues, every patient who dies could have been saved by ivermectin and is, therefore, a homicide.
Nothing that contradicts these people's worldview is going to matter. I don't care how well designed the trial is, how well managed the data analysis, or how credible the institutions and backers. The people interested in this treatment are not interested because they think it works. They are interested in it precisely because our medical and research institutions say it does not work.
[additional thought added as edit] One way to think about this is that you can't "umm, well, actually..." people into trusting public health. Throwing "the science" in their face isn't persuading the, I dunno, 30% of the country that is deeply skeptical of government, education, business, etc. Do you think Florida's surgeon general is going to change his stance and recommend vaccines for kids (not mandate, just recommend!) if there is a new well done clinical trial showing once again that the vaccines are safe for kids? No. He's going to continue recommending against vaccinations because the efficacy and safety and protection for kids against disease don't matter.
This reasoning does not make any sense. I'm sure it's true that the most extreme people aren't persuaded by the new evidence. But that doesn't speak to whether *anyone* is persuaded by the new evidence.
On any issue, most people are not extreme partisans. Probably most people who are interested in Ivermectin heard about the promising early studies, are suspicious that Big Pharma isn't willing to follow up because it wouldn't be profitable (true!), and would be grateful to hear that someone did follow up and it turned out not to work.
I could see that and am open to the idea that I'm overestimating the number of extremists.
Yet, I'm not sure where to draw the line on what to investigate. Lots of people believe in astrology, healing crystals, the power of prayer, zinc and vitamin D as cure-alls, and the infamous bleach injections. At some point, there has to be a threshold for "we study this because it may make a positive difference in treatment or public perceptions" vs "we study everything anyone wants us to study no matter how zany and unlikely". Maybe, given the public health implications, ivermectin research was worth it. I'm not so sure but I'm willing to entertain that I'm wrong here. But the principle "study things that people are interested in" has to have limits or it's just as big a waste of time as an arduous and inefficient FDA process.
Honestly I’m on the fence about the ivermectin question. I think there was virtually no chance of it being effective — I think the in vitro effect required concentrations that would have been dangerous in a human. On the other hand, if it is being used, there is a good argument to test it so that we can say “these are the results”. If it’s mostly political, then there are plenty of people who will never be convinced. Hopefully, if you do it early enough, you would have results before it becomes part of the political allegiance.
I guess what I’m saying is that when it comes to clinical trials we might need to remember the public health maxim “meet people where they are”. Ideally we wouldn’t have to trial low probability drugs but our world is far from ideal right now.
Regarding the footnote about the FDA requiring proof of efficacy... part of the reason for this is that FDA approvals are based on a relative assessment of benefits (efficacy) vs. harms (safety).
Safety is not a simple binary where all FDA-approved products are proven "safe." The FDA has signed off on the safety of, say, the OTC allergy medications (Claritin, Zyrtec, etc.) and also of cytotoxic chemotherapies. In an absolute sense, the former are obviously MUCH safer than the latter, and, indeed, in most medical settings outside of cancer, cytotoxic chemotherapies would be considered unsafe.
I don't think the idea that the FDA should switch to "safety-only" regulation is crazy, but it would be complicated. Maybe the FDA would need to rate safety on a sliding scale, with 1 being "safe enough to market as an OTC allergy med" and 10 being "unsafe in most contexts but acceptable as a cytotoxic chemotherapy."
In theory, individual physicians are supposed to be expert enough to make a customized risk-benefit calculation for each individual patient when they decide to prescribe a particular drug. Physicians are generally allowed to prescribe anything that's legally available for any reason they, in their personal professional opinion, believe will, net-net, improve the patient's health, whether it's an off-label use of an FDA-approved drug or a legal drug that hasn't been reviewed by the FDA at all. The main legal limits on what they can prescribe are tort malpractice standards. (That's why there's a "learned intermediary" doctrine that protects drug manufacturers from liability when a physician's prescribing decision results in the patient being harmed by a drug, as long as the drug company disclosed known risks in the product label or the physician independently knew the risks.)
But as expert as physicians might be, and as much as they sometimes chafe at evidence-based prescribing standards, practice has shown that they, like all humans, are subject to making irrational, non-evidence-based decisions (witness the number of docs pushing quack covid cures).
So there needs to be some evidence-based expert oversight body. Currently that role is filled by the FDA and by insurance plan formulary and drug utilization review committees. Maybe someday we'll get to the point where the FDA can drop back and the private sector can fill that role entirely, but we're not there now, and I think it would require considerably more consolidation on the payor/provider side of the industry for that to work, because currently payors and providers don't have the capacity to fully replicate what the FDA does.
Can I just second the sense of the despair about institutions and politics?
Bioethicists are to important medical research what environmentalists/housing advocates are to new home construction? They are so caught up in their own bull$hit that they end up doing more harm than good?
And while there’s most certainly a need for them they need less power and less control of policy.