@sim0ngates @chrisdc77 @hafetzj @eturnermd1 Yeah, I thought about (and should have said something about) the distinction between industry-funded vs academic-sponsored trials. The exact process is a bit different but the challenges would be similar-ish. Agree that industry/regulatory bodies would have to be on board.
@sim0ngates @chrisdc77 @hafetzj @eturnermd1 Of course the easiest way to make this happen would be for the major regulators to make it happen. But as Chris (I think?) said a little while ago, this was evidently part of the original discussion for ClinicalTrials.gov but they didn’t go all the way to RRs.
@sim0ngates @chrisdc77 @hafetzj @eturnermd1 I think some academic trialists might be persuaded, or at least attracted, by the idea that they could have a much-expedited peer review process on the back end. It can be frustrating to do a trial, write up your results & then spend another year submitting to 3 different journals.
@sim0ngates @chrisdc77 @hafetzj @eturnermd1 It would create a little bit more work on the front end (though as Chris alludes, if you integrated the DSMB and IRB initial protocol reviews with the RR process somehow, this could end up washing out I think), but IMO at the end just submitting your results & being published…
@sim0ngates @chrisdc77 @hafetzj @eturnermd1 …would be really nice versus going through a couple different journals and sometimes getting questions that are a bit tedious or being asked to do different analyses.
Thread on relationships between researchers and statistical consultants. Prompted by a few recent tweets, but not only those, as this is a recurring and always-relevant conversation.
On the "researcher seeking stats help" side, there is an often-justified feeling that statistical consultants are difficult to work with (even those in good faith) and sometimes downright unhelpful or unpleasant.
So, let's address those frustrations right up front as part of this thread about making these relationships productive & relatively happy.
Has anyone in *medicine* (or otherwise, but particularly interested in US academic medicine) actually proposed a study where they said they'd use an alpha threshold above 0.05? How was it received? (cont)
(Also, please do me a favor, spare me the arguments about NHST being a flawed paradigm on this particular thread)
Clearly not all studies have the same tradeoffs of a false-positive vs a false-negative finding, and in some cases a higher alpha threshold seems like it should be warranted...
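Purely to illustrate that tradeoff (my own made-up numbers, assuming a simple two-sample comparison, not anyone's actual proposal): relaxing alpha buys a smaller required sample size for the same power.

```r
# Illustration only: required n per arm at different alpha thresholds,
# holding the effect size (0.3 SD, an arbitrary choice) and power (80%) fixed.
alphas <- c(0.05, 0.10, 0.20)
sapply(alphas, function(a)
  ceiling(power.t.test(delta = 0.3, sd = 1, sig.level = a, power = 0.80)$n))
```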
@Jabaluck @_MiguelHernan @aecoppock I think (perhaps unsurprisingly) that this shows “different people from different fields see things differently because they work in different contexts” - the scenario you painted here is not really possible with how most *medical* RCTs enroll patients & collect baseline data
@Jabaluck @_MiguelHernan @aecoppock The workflow for most medical RCTs (excepting a few trial designs…which I’ll try to address at the end if I have time) is basically this:
@Jabaluck @_MiguelHernan @aecoppock 1. Clinics/practices/hospitals know that they are enrolling patients in such-and-such trial with such-and-such criteria.
Amusing Friday thoughts: I've been reading Stuart Pocock's 1983 book Clinical Trials: A Practical Approach (do not concern yourself with the reason).
There is a passage on "Statistical Computing" in Chapter 11 of the book which one might have expected would age poorly, but is in fact remarkable for how well several of the statements have held up.
"I would like to refer briefly to the frequent misuse of statistical packages. Since they make each analysis task so easy to perform, there is a real danger that the user requests a whole range of analyses without any clear conception of what he is looking for."
Fun thread using some simulations modeled on the ARREST trial design (presented @CritCareReviews a few months ago) to talk through some potential features you might see when we talk about “adaptive” trials
DISCLAIMER: this is not just a “frequentist” versus “Bayesian” thread. Yes, this trial used a Bayesian statistical approach, but there are frequentist options for interim analyses & adaptive features, and that’s a longer debate for another day.
DISCLAIMER 2: this is just a taste using one motivational example for discussion; please don’t draw sweeping generalizations about “what adaptive trials do” from this thread, as the utility of each “feature” must always be carefully considered in its specific context
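To make the “interim analysis” idea concrete before getting into the trial itself, here is a minimal, generic sketch in R. This is NOT the actual ARREST design; every event rate, look schedule, and threshold below is a made-up assumption for illustration.

```r
# Generic sketch of ONE adaptive feature: stopping early for efficacy when the
# posterior probability that the treatment arm has a higher success rate
# crosses a threshold. All numbers here are arbitrary assumptions.
set.seed(42)

sim_one_trial <- function(p_ctrl = 0.10, p_trt = 0.25,
                          looks = c(30, 60, 90), threshold = 0.975) {
  n_max <- max(looks)
  ctrl <- rbinom(n_max, 1, p_ctrl)  # control-arm outcomes, generated up front
  trt  <- rbinom(n_max, 1, p_trt)   # treatment-arm outcomes
  for (n in looks) {
    # Beta(1, 1) priors, so each arm's posterior is Beta(1 + events, 1 + non-events)
    draws_c <- rbeta(1e4, 1 + sum(ctrl[1:n]), 1 + n - sum(ctrl[1:n]))
    draws_t <- rbeta(1e4, 1 + sum(trt[1:n]),  1 + n - sum(trt[1:n]))
    if (mean(draws_t > draws_c) > threshold) {
      return(list(success = TRUE, stopped_early = n < n_max, n_per_arm = n))
    }
  }
  list(success = FALSE, stopped_early = FALSE, n_per_arm = n_max)
}

results <- replicate(1000, sim_one_trial(), simplify = FALSE)
mean(sapply(results, `[[`, "success"))        # rough "power" under these assumptions
mean(sapply(results, `[[`, "stopped_early"))  # how often the trial stops before n_max
```

Re-running the same function with p_trt equal to p_ctrl shows how the interim looks affect the overall type I error, which is exactly the kind of operating characteristic this thread is about.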
Here is a little intro thread on how to do simulations of randomized controlled trials.
This thread will take a while to get all the way through & posted, so please be patient. Maybe wait a few minutes and then come back to it.
This can be quite useful if you’re trying to understand the operating characteristics (power, type I error probability, potential biases introduced by early stopping rules) of a particular trial design.
I will use R for this thread. It is free. I am not interested in debates about your favorite stats program at this time.
If you want to do it in something else, the *process* can still be educational; you’ll just have to learn to mimic this process in your preferred program.
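As a very rough starting point (not the thread's own code, just the shape of the process under made-up event rates): write a function that simulates one trial, repeat it a few thousand times, and summarize how often it “wins” under the effect you hope for and under the null.

```r
# Minimal sketch: simulate a two-arm trial with a binary outcome many times and
# estimate power and type I error empirically. Event rates, sample size, and
# number of simulations are arbitrary choices for illustration.
set.seed(123)

sim_trial <- function(n_per_arm, p_ctrl, p_trt) {
  ctrl <- rbinom(n_per_arm, 1, p_ctrl)
  trt  <- rbinom(n_per_arm, 1, p_trt)
  # Two-sample test of proportions; TRUE if "significant" at the 0.05 level
  prop.test(c(sum(trt), sum(ctrl)), c(n_per_arm, n_per_arm))$p.value < 0.05
}

n_sims <- 5000

# Power: simulate under the assumed treatment effect
power_est <- mean(replicate(n_sims, sim_trial(200, p_ctrl = 0.20, p_trt = 0.30)))

# Type I error: simulate under the null (both arms share the same event rate)
alpha_est <- mean(replicate(n_sims, sim_trial(200, p_ctrl = 0.20, p_trt = 0.20)))

c(power = power_est, type_I_error = alpha_est)
```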