In this one clip from his latest video, you can see the complete blindness of Vinay Prasad to the cult of RCTs.

He contradicts himself in a messy jumble of confusion because he's making a good point but can't go far enough.

Let me explain 🧵
First, in his rush to defend RCTs, he presents a study comparing meta-analyses of RCTs with meta-analyses of observational trials (or big trials of both types).

He forgets that meta-analyses themselves are observational.
And that includes meta-analyses of meta-analyses too.
So he uses an observational study of observational studies on observational or randomized studies to prove that... what? That observational studies should not be trusted?

You'll need to use another tool for that job, son.
pubmed.ncbi.nlm.nih.gov/15929750/
Then he looks at the results of that meta-meta-analysis of sorts, and sees that observational and randomized trials agree in two-thirds of cases. Weird! You'd think that with so much noise, that wouldn't be the case.

But even so, that tells us nothing about which is better.
Then he has his triumphant moment when he finds that the comparisons agree with "statistical significance" in only 1/6 of cases.

He forgets to inform us that statistical significance is a completely arbitrary threshold in this case. Because meta-analyses are observational.
And even if it wasn't, it STILL doesn't tell you anything about whether observational trials or RCTs are better.
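To make concrete how little that 1/6 figure tells you, here's a toy simulation (all the effect sizes and standard errors are invented): two pooled estimates of the exact same true effect, one "RCT" and one "observational", differing only by sampling noise. Even then, they don't always agree on direction, and they jointly clear p < 0.05 far less often than that.

```python
# Toy sketch: two UNBIASED estimates of the SAME true effect.
# All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                   # simulated topic-by-topic comparisons
true_effect = 0.15            # same true log-RR for both designs
se_rct, se_obs = 0.12, 0.10   # assumed standard errors of the pooled estimates

rct = rng.normal(true_effect, se_rct, n)
obs = rng.normal(true_effect, se_obs, n)

same_direction = np.mean(np.sign(rct) == np.sign(obs))
both_significant = np.mean(
    (np.abs(rct / se_rct) > 1.96)        # "significant" RCT estimate
    & (np.abs(obs / se_obs) > 1.96)      # "significant" observational estimate
    & (np.sign(rct) == np.sign(obs))     # ...pointing the same way
)
print(f"agree on direction:        {same_direction:.2f}")
print(f"agree with 'significance': {both_significant:.2f}")
```

Low "significant agreement" is just what unbiased noise looks like. It can't tell you which design is broken.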

What you need for that is to see whether RCTs get the right signal AHEAD OF TIME.

This is where the second facepalm moment in the video comes in.
Unfortunately I can't find the AHRQ report he talks about, and he doesn't link it, but from context I surmise that what he's seeing is that an older meta-analysis of RCTs on lung cancer and selenium disagrees with a more recent meta-analysis of more recent RCTs on the same question.
First of all, that doesn't mean the newer meta-analysis is more correct, especially if they don't cover the same studies. Remember, meta-analyses are observational!

But even if it did, that would mean the prior meta-analysis of RCTs was wrong. So why should we trust the new one?
He's using this to claim that even the little concordance that was left is going away, but what he actually accomplishes is to demonstrate that meta-analyses of RCTs are just as bad, and that they reverse themselves just as easily.
So while he's preaching the gospel of how nutritional science is garbage, and he's right about that, the problem isn't with observational studies. The problem is that the brand of statistics used for "evidence-based medicine" is junk, and that includes his beloved RCTs.
The reason nutritional science sucks is the same reason modern medicine does. They isolate a specific question in a chaotic system and try to give definitive, broadly applicable answers. Answers that often don't exist. So they "hallucinate". Just like LLMs.
The truth of the matter is that the "analytic flexibility" he bemoans in observational trials is alive and well in RCTs. There is a metric ton of wiggle room to nudge results in either direction. Until the day he comes to terms with that fact, he'll continue to be lost.
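If you want to see that wiggle room for yourself, here's a toy "forking paths" run. Everything in it is invented (the dataset, the subgroup cutoffs, the outlier rule); the point is only that analysis choices alone, each individually defensible, move the p-value around on data with no effect in it at all.

```python
# One NULL dataset, many defensible analysis specifications.
# All cutoffs and rules below are invented for illustration.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 400
treated = rng.integers(0, 2, n).astype(bool)
age = rng.normal(60, 10, n)
outcome = rng.normal(0, 1, n)            # NO true treatment effect

p_values = []
for min_age, max_age, trim in itertools.product(
    [0, 50, 55],        # all adults / "over 50" / "over 55"
    [120, 75, 70],      # everyone / "under 75" / "under 70"
    [False, True],      # with/without a 2-sigma outlier exclusion
):
    keep = (age >= min_age) & (age <= max_age)
    y, t = outcome[keep], treated[keep]
    if trim:
        inliers = np.abs(y - y.mean()) < 2 * y.std()
        y, t = y[inliers], t[inliers]
    p_values.append(stats.ttest_ind(y[t], y[~t]).pvalue)

print(f"{len(p_values)} analyses of the same null data: "
      f"p ranges from {min(p_values):.3f} to {max(p_values):.3f}")
```

Now multiply that by choices of endpoint, follow-up window, covariate set, and missing-data handling, and "randomized" stops being a synonym for "unambiguous".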
This stuff isn't too complicated to understand for people with engineering or similar backgrounds. It's just that most of the people who understand it are in medicine, and there's no money, or motivation, in calling out your entire profession. And those who do get discredited.
So ultimately, what gets presented as sophisticated statistics is just a coping mechanism for a privileged club.

I mean, just think about what a meta-analysis is supposed to do. You have many studies (randomized, even) contradicting each other.
So what are you supposed to do? You plug them into a formula, and supposedly the result comes out closer to the truth than any of the individual studies.

But wait a minute. How is this science? If in physics multiple experiments disagreed, we wouldn't average them out.
Why is nobody asking WTF is going on when multiple randomized clinical trials with sufficient statistical power end up with opposite conclusions? Are we supposed to attribute this to chance? Is something else going on? Is perhaps the answer different for different populations?
Who cares! Throw it into DerSimonian-Laird and hope for the best!
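For the curious, DerSimonian-Laird really is just a page of arithmetic. Here's a minimal sketch (the three "trials" below are invented, and deliberately contradictory) of how three studies pointing in three different directions come out the other end as one tidy pooled number:

```python
# The DerSimonian-Laird random-effects pooler, minus the ceremony.
import numpy as np

def dersimonian_laird(y, v):
    """Pool effect estimates y with within-study variances v."""
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)              # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q (heterogeneity)
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Three mutually contradictory "trials" (log relative risks, invented):
y = np.array([-0.40, 0.05, 0.45])   # clear harm, nothing, clear benefit
v = np.array([0.01, 0.01, 0.01])    # each one individually well powered
pooled, se, tau2 = dersimonian_laird(y, v)
print(f"pooled: {pooled:+.3f} +/- {1.96 * se:.3f}  (tau^2 = {tau2:.3f})")
```

The contradiction doesn't get investigated; it gets absorbed into tau² and averaged into a respectable-looking confidence interval around "no effect".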

And somehow, it gets worse.

Evidence-Based Medicine (as distinct from evidence-based medicine) doesn't have any notion of cost-benefit analysis.
In EBM as practiced, a deadly adverse event below the statistical significance threshold is not real, but a benefit, even a cosmetic one, is recognized as long as it clears statistical significance.
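A back-of-envelope illustration of that asymmetry (every number here is invented): a trial sized to show a modest cosmetic benefit will clear significance on the benefit almost every time, while a tripling of a rare deadly harm in the same trial stays "not statistically significant" the vast majority of the time.

```python
# Approximate power of a two-sided two-proportion z-test.
# All rates and the sample size are invented for illustration.
from scipy import stats

def power_two_prop(p1, p2, n_per_arm, alpha=0.05):
    p_bar = (p1 + p2) / 2
    se0 = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5          # SE under H0
    se1 = (p1 * (1 - p1) / n_per_arm
           + p2 * (1 - p2) / n_per_arm) ** 0.5                  # SE under H1
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf((abs(p1 - p2) - z_crit * se0) / se1)

n = 500  # patients per arm
# cosmetic benefit: response rate goes from 30% to 40%
print(f"power to 'see' the benefit: {power_two_prop(0.30, 0.40, n):.2f}")   # ~0.91
# deadly harm: a rare event triples, from 0.1% to 0.3%
print(f"power to 'see' the harm:    {power_two_prop(0.001, 0.003, n):.2f}") # ~0.11
```

So the benefit gets a p-value and a press release, while the harm, which the trial was never capable of detecting, gets declared "not real". (The normal approximation is generous at these event rates; an exact test does even worse.)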

And that's how you get FDA-approved hair-loss prevention medication that can ruin your life.
...and this is why even someone like Vinay Prasad, who tries hard to be disciplined and stick to strict EBM (most of the time), gets lost in the statistical contortions and starts spouting nonsense.

And in the comments, hundreds of practicing doctors applaud him. One of them could be yours.
I'm sorry I don't have a hopeful answer here. I'm as lost as you are. All I know for sure is that the people spouting numbers can't even keep their story straight. That's something, but it's not much.
