Andrew Althouse @ADAlthousePhD
OK, folks. Let’s all sit down for a quick chat about this article the “Annals of Medicine” coughed up a few weeks ago about RCT’s:…
This will be my longest TWEETORIAL yet, but that’s because we have a LOT to talk about here
Several others have discussed this recently on Twitter and I welcome their thoughts as well (@statsepi, @stephensenn, @briandavidearp, @Prof_Livengood, @learnfromerror)
I suspect most of them would prefer not to give this article a second thought other than “Ugh, these tired old arguments again?”
But I do worry about the influence of a poorly-informed article with a flashy title (“Why all randomised controlled trials produce biased results”) going viral with the like-n-share treatment
And when I see this getting shared / covered / promoted by a credible source like BMJ
I’d like to issue some clarifications for those who may not spot the issues quite so easily
Doug Altman and @stephensenn already issued a brief comment in the BMJ, hoping to minimize the damage (…)
I’d like to expand on their response, though, and hopefully disseminate this across another audience of readers in #medtwitter
Most of what I say here will not be original content. Several of the foremost statisticians and trialists have written about most of this before.
One would hope that an LSE post-doc writing an article with such a definitive-sounding title would have read something of the history of the subject
Before we go further, let me say this: I do not know the author. He is probably a very nice person. Most people are. He is also probably a very smart person. I am not under the impression that LSE is a factory of dimwits
Unfortunately, being a very nice person, and even being a very smart person, does not guarantee that one is sufficiently qualified to offer an intelligent opinion on all subjects
And in this particular case, it appears that the author has stepped outside his particular bounds of expertise, and in doing so has written a potentially inflammatory and damaging article based on misguided beliefs about the conduct and intent of RCT’s
So let’s go through a few key points:
Also: this is the “first study to examine that hypothesis”
Um, I am sorry to burst your bubble, but people have written about trial methodology once or twice before
The study identifies a number of “novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies”
Protip to all you kids out there
If you want to write an article with a provocative title in a major research area, it’s usually a good idea to read one or two things about that research area
Again: NO ONE SAYS TRIALS ARE EXEMPT FROM THESE THINGS! If anything, trialists are MORE STRICT about this stuff than anyone
Oh, good, let’s pick 10 trials and use that to make a sweeping conclusion about “Why all randomized controlled trials produce biased results”
Again, I think we’re arguing a strawman here. EVERYONE involved in trials thinks that their strengths and limitations should be carefully scrutinized and discussed.
Yes, we know randomization is largely infeasible for answering some scientific questions. That’s not an argument against using it where we CAN answer specific questions
It’s almost like the author hasn’t read anything about clinical trials since the 1990’s
But hey, why actually READ about such innovative clinical trials when you’re busy writing a piece claiming that they can’t do something?
Kind of got a point here. But any experienced trialist knows to comment on this. Trial findings ALWAYS need to be kept in appropriate context of the patients/population that were actually enrolled in the trial
Oh dear. So many people believe this is a thing. Folks, this isn’t a thing. Take it from @stephensenn
No, randomization is not guaranteed to ensure a balanced distribution. Nor is that a condition for valid statistical inference from trials.
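To make the point concrete, here’s a minimal simulation sketch (all numbers are hypothetical, chosen for illustration): across many randomized two-arm trials with a prognostic covariate, any single trial can show chance imbalance on that covariate, yet the simple difference-in-means estimate of the treatment effect is still unbiased across randomizations — balance in each trial is not what makes the inference valid.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 1.0
n_per_arm, n_sims = 100, 2000

imbalances, estimates = [], []
for _ in range(n_sims):
    # a prognostic covariate (e.g. age), and randomly assigned arms
    age = rng.normal(60, 10, 2 * n_per_arm)
    arm = rng.permutation([0] * n_per_arm + [1] * n_per_arm)
    # outcome depends on BOTH the covariate and the treatment
    y = 0.05 * age + true_effect * arm + rng.normal(0, 1, 2 * n_per_arm)
    imbalances.append(age[arm == 1].mean() - age[arm == 0].mean())
    estimates.append(y[arm == 1].mean() - y[arm == 0].mean())

# any single trial can be imbalanced on age by a year or more...
print(f"typical age imbalance in one trial: +/- {np.std(imbalances):.2f} years")
# ...but the estimated effect is still centered on the truth
print(f"mean estimated effect: {np.mean(estimates):.3f} (truth = {true_effect})")
```

Chance imbalance inflates the variance of any one estimate, which is exactly what the standard error already accounts for — it does not introduce bias.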
Oh, for the love of… re-randomization (the way it’s described here) is only possible if all of the units being randomized are available at the same time
Raise your hand if you’ve ever been part of a trial where this was done, or even remotely feasible
Someone actually did give an example the other day; it CAN happen; it’s also EXTREMELY rare in medical trials
Remember, this is published in Annals of Medicine. And it brings up 10 trials from the field of medicine. Anyone care to guess how many of those 10 trials this would have been remotely feasible for?
The acute ischemic stroke trial enrolled from January 1991 to October 1994
Those poor people from January 1991. They’d have to wait three years to get their randomization!
That’s a long time to wait for stroke treatment
OK. This is, like, kind of half partway right. Underpowered trials can be a problem. But small trials aren’t necessarily more BIASED than large trials. They produce a less precise estimate of treatment effect. That is not the same thing as bias.
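The bias-versus-precision distinction is easy to show by simulation (a sketch with made-up effect sizes): simulate many small trials and many large trials of the same treatment; both sets of estimates center on the true effect, so neither is biased — the small trials just scatter more widely around it.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.5
n_sims = 5000

def simulate(n_per_arm):
    # difference in means from one randomized two-arm trial, repeated n_sims times
    ctrl = rng.normal(0, 1, (n_sims, n_per_arm))
    trt = rng.normal(true_effect, 1, (n_sims, n_per_arm))
    return trt.mean(axis=1) - ctrl.mean(axis=1)

small = simulate(20)    # an "underpowered" trial
large = simulate(500)   # a well-powered trial

# both are centered on the truth -> neither is biased;
# the small trials simply have a much wider spread (less precision)
print(f"small trials: mean {small.mean():.3f}, spread (SD) {small.std():.3f}")
print(f"large trials: mean {large.mean():.3f}, spread (SD) {large.std():.3f}")
```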
Um. No. Those are not the same thing. Those are not even close to the same thing.
The probability of 317 heads / 307 tails in 624 tosses is NOT EVEN CLOSE to the same calculation as the probability of the trial results described here
If you think they are the same, let me know and I’ll work out the math for each & post here
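For the coin side of that offer, the calculation is a single exact binomial term (assuming a fair coin); the trial probability would instead require a model of two independent arms with their own event rates, which is a different calculation entirely.

```python
from math import comb

# probability of EXACTLY 317 heads in 624 tosses of a fair coin
n, k = 624, 317
p_coin = comb(n, k) * 0.5 ** n
print(f"P(exactly {k} heads in {n} tosses) = {p_coin:.4f}")
```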
Yes. We should report the primary outcome and secondary outcomes. All good trials do this. If the incidence of adverse events counterbalances the other benefits, it will be commented on. Straw man.
Um. Some trials are “efficacy” trials (does this work under a tightly controlled setting) while some are “effectiveness” (does this work in the real world, the way it’s going to be used?) - and both have value.
Sigh. We have statistical methods to account for this.
While the average treatment effect is often the primary finding reported, a modern generation of trials is working on ways to provide tailored estimates of treatment effect to specific groups / patients
Further, I believe the ATE is generally transportable (as @f2harrell is fond of saying)
“Here are a bunch of other words about things vaguely related to trials”
And we’re back to this again. Sigh. Zero of the 10 trials in the, uh, “systematic review” thingy here could have done this. And, remember kids, it’s still not a necessary condition for valid statistical inference from RCT’s.
OK. Now I’ve actually exhausted the other points I had. In summary:
Trials are not unimpeachable. They are indeed subject to assumptions, biases, and limitations. They also provide the strongest evidence we have for many medical questions.
If you want to “come at” trials, please, at least use actual valid criticisms. Inventing a term and putting it in italics does not make it a valid criticism.
For a closing laugh, consider the circularity of someone calling out trials for small sample size & problems with generalizability who wrote a paper with “all trials” in the title and included just 10 trials for critical review