How much does public research funding affect drug development & market success?
Two papers have looked at this (h/t @mattsclancy @Atelfo).
Based on these, I ran some quick calculations for ME/CFS and Long Covid
🧵
1/
Toole (2012) found that a 1% increase in NIH funding increases new drug approvals (17-24 years later) by 1.8%, or about $706M in 2010 USD per drug approval sciencedirect.com/science/articl…
Azoulay et al. (2019) find that $10 million in public funding yields 2.7 new patents (though only 1.4 in the same disease area!)
Only 1 in 116 patents in their database is linked to a successful drug. So, ~$430 million in cumulative public funding is needed per drug approval.
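A minimal sketch of that back-of-the-envelope arithmetic (my own calculation from the quoted figures, not the paper's code):

```python
# Back-of-the-envelope from the Azoulay et al. (2019) figures quoted above
patents_per_10m = 2.7     # new patents per $10M of public funding
patents_per_drug = 116    # only ~1 in 116 patents is linked to an approved drug

funding_per_drug = patents_per_drug / patents_per_10m * 10  # in $ millions
print(f"~${funding_per_drug:.0f}M of public funding per approved drug")  # ~$430M
```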
So, where do ME/CFS and Long Covid fall on this?
From 2008 to 2024, ME/CFS received only a paltry $157M from the NIH.
Adjusted for inflation, that's ~$137M in 2010 dollars.
That's only 19-32% of the way to a single approved drug.
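How I got that range (a quick sketch; the $706M and ~$430M per-drug figures come from the two papers above):

```python
# ME/CFS cumulative NIH funding 2008-2024, in 2010 dollars ($M)
mecfs_funding = 137

toole_per_drug = 706      # $M per approval, implied by Toole (2012)
azoulay_per_drug = 430    # $M per approval, derived from Azoulay et al. (2019)

print(f"Toole:   {mecfs_funding / toole_per_drug:.0%} of the way to one drug")    # ~19%
print(f"Azoulay: {mecfs_funding / azoulay_per_drug:.0%} of the way to one drug")  # ~32%
```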
Long Covid has received cumulatively about $1.8B from the NIH; that's about $1.4B in 2010 dollars.
The models predict 2.02 to 3.31 drugs based on just this amount, though only 17-24 years after the funding...
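Same arithmetic for Long Covid (the exact input of ~$1,425M in 2010 dollars is my assumption; it's the figure that reproduces the quoted 2.02-3.31 range, the thread only states ~$1.4B):

```python
# Long Covid cumulative NIH funding in 2010 dollars ($M);
# ~$1,425M is an assumed figure that reproduces the 2.02-3.31 range
lc_funding = 1_425

print(f"Toole:   {lc_funding / 706:.2f} predicted drugs")  # ~2.02
print(f"Azoulay: {lc_funding / 430:.2f} predicted drugs")  # ~3.31
# ...arriving only 17-24 years after the funding, per Toole's lag estimate
```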
The picture looks pretty bleak for ME/CFS. We definitely need more funding for it!
Also, improvements in research quality and in market incentives would significantly raise the expected number of drugs.
The research shows significant spillover effects. In fact, more patents (2.2) were filed for *other* indications than for the indications the NIH grants were originally intended for (1.4).
LC research is especially likely to spill over to ME/CFS! But other research can spill over as well!
Lastly, please bear in mind that these calculations were hastily made (limited spoons), and I may have misunderstood something. I didn't even read the papers!
I converted the Azoulay patent figures into drug approvals myself.
Also, my interpretation here assumes no diminishing returns.
This was inspired by a blog post by @atelfo with a long list of interesting questions about biotech.
What's interesting about these findings is that they correlated moderately strongly with CPET performance! That's in addition to the clean separation between healthy controls vs. LC* & ME.
The x-axis appears to be a composite measure of 'CF * COL4 (Lumen)'
(Lumen is the internal space of the capillary, through which the blood flows)
In the caption it's called 'CF*Lumen'
I don't know what CF means. Capillary flow? Collagen fiber?
Also, you can create all kinds of composite measures, so this increases the risk of p-hacking.
But if we leave that aside and assume there's a benign reason for it, we're left with some fascinating results... because these findings aren't unique to LC/ME.
This is an important point: it's not only patient-reported outcomes (surveys) that are susceptible to bias.
Many supposedly "objective" measures are susceptible to 'effort effects': people trying harder because they expect they can do more, expect less PEM, want to please, etc.
Some examples with high risk of 'effort effects':
- 6-minute walk test
- grip strength
- any other "simple" exercise performance
- real-time brain activity
Lower but still some risk of effort/subjective effects:
- daily step count
- work hours
- sleep data
- daily time upright
- probably some exercise metrics (e.g. difference in VO2Max on 2-day CPET)
If you want to get better at evaluating science, I can highly recommend the book Science Fictions by @StuartJRitchie
I just finished it; it really drove home the many issues with science, and I learned a lot!
A few takeaways 🧵
The image above illustrates it well: looking only at registered trials and their primary outcomes, only 50% of trials found a positive effect for various depression treatments.
However, through publication bias, outcome switching, spinning of results, etc., the published literature looks much rosier!
Treat every paper with the scepticism you'd apply to a "CBT for ME/CFS" paper.
Just because you like positive results doesn't mean they're well supported.
Sample: they focused on the most severe patients, who had at least some measurable biological dysfunctions (POTS, microvascular, endothelial, pulmonary)
I really like this: it's presumably easier to find abnormalities in an extreme population.
Severity doesn't seem to have been measured via a scale, though 🫤
Increased IgG to SARS-CoV-2 in Long Covid vs. convalescent