@Jabaluck @_MiguelHernan @aecoppock I think (perhaps unsurprisingly) that this shows “different people from different fields see things differently because they work in different contexts” - the scenario you painted here is not really possible with how most *medical* RCTs enroll patients & collect baseline data
The workflow for most medical RCTs (excepting a few trial designs…which I'll try to address at the end if I have time) is basically this:
1. Clinics/practices/hospitals know that they are enrolling patients in such-and-such trial with such-and-such criteria.
2. As potentially eligible patients present themselves, the designated staff may approach them and inquire whether they are interested in being part of the aforementioned trial.
3. They will perform an eligibility assessment and confirm that the patient is eligible for the trial. Depending on how complex the eligibility criteria are, this may be done on the spot or may take days or weeks.
(Notice that at this point we haven't even started talking about their treatment assignment or the randomization process yet.)
4. OK, so now the patient has been confirmed as "eligible" for the trial. In most modern RCTs these data are recorded in an electronic data management system. Once the data are entered to confirm eligibility - again, all of this before anyone knows treatment assignment…
4b. Those data are now locked and cannot be modified without filing a data change request form with the data coordinating center. There's no monkeying with these data because someone found out the patient's treatment and decided to change something.
5. Now that the patient has been confirmed eligible and has consented to be in the trial, any baseline data of interest are collected. The patient *still* hasn't actually been randomized yet. We want to collect as much of this data as possible before randomizing them…
6. Only after the patient has been confirmed eligible and had most/all baseline data recorded and entered are they randomized in the trial. How does this part work?
6b. The randomization sequence is separately prepared, hidden from view of the staff (allocation concealment is a basic principle of trial implementation in medicine - the person enrolling patients in the trial should have no idea what the next assignment is, to avoid precisely this).
6c. In this way we can ensure that the eligibility & baseline data are unaffected by knowledge of tx assignment - it's all entered before the patient is assigned, and by someone who has no ability to see what the next assignment is until the data are entered and locked.
(Depending on the exact trial design - the level of blinding possible - it's possible they won't even know *after* the patient is randomized what they are assigned to - but even in an open-label trial, all of the baseline data have to be entered & confirmed before anyone knows the tx.)
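The enroll → lock → randomize sequence in steps 4–6 can be sketched in code. This is a hypothetical toy (real trials use dedicated electronic data capture systems; `TrialRecord`, the field names, and the error types here are all my invention), but it shows the key invariant: the assignment is drawn only after the baseline data are locked, so knowledge of the arm cannot leak into what was entered.

```python
import random

class TrialRecord:
    """Toy sketch of one patient's record in an EDC-like workflow.

    Invariant: randomize() refuses to run until the baseline data
    are locked, and locked data can no longer be edited directly.
    """

    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.baseline = {}
        self.locked = False
        self.assignment = None

    def enter_baseline(self, field, value):
        # Direct edits are forbidden once the record is locked;
        # real systems would route this through a data change request (DCR).
        if self.locked:
            raise PermissionError("Record locked: file a DCR with the coordinating center")
        self.baseline[field] = value

    def lock(self):
        self.locked = True

    def randomize(self, rng):
        # Assignment is only drawn AFTER baseline entry is locked.
        if not self.locked:
            raise RuntimeError("Baseline data must be locked before randomization")
        self.assignment = rng.choice(["treatment", "control"])
        return self.assignment

rng = random.Random(2024)
rec = TrialRecord("PT-001")
rec.enter_baseline("sbp", 142)
rec.lock()
arm = rec.randomize(rng)
```

The ordering constraint, not the class design, is the point: any attempt to touch baseline data after the lock fails loudly, mirroring the DCR process described above.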
The point is, in most (modern, medical-environment) RCTs, there are no lists being merged or anything of the sort you're worried about here. The data for each patient are entered directly, in real time, before knowledge of tx assignment, and locked…
…and so can only be "changed" by filing a data change request (even the smallest possible thing, like "I typed 90 and it should have been 80," still requires a formal DCR to the data coordinating center for us to unlock and change that value).
All of this to say that these across-field "wait, you do it that way? why?" discussions often eventually reveal that, surprise, there are other differences in the way we see and do things which influence them.
And I'll admit that I'm quite skeptical that running a few baseline t-tests will be a particularly sensitive method for detecting errors of the type: "the research staff screwed with the data and/or merged the data wrong."
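That skepticism is easy to illustrate with a toy simulation. All the numbers below are made up for illustration (100 patients per arm, a baseline variable like SBP ~ N(120, 15), and "tampering" that shifts 5 values in one arm by 10 units); the Welch t-test uses a normal approximation to the p-value, which is adequate at these sample sizes. The question is simply how often a baseline t-test at α = 0.05 would flag the tampered data.

```python
import math
import random

def welch_t_p(x, y):
    """Welch two-sample t statistic with a normal approximation
    to the two-sided p-value (fine for large-ish samples)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    t = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # Normal-approximation p-value: 2 * (1 - Phi(|t|))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

rng = random.Random(7)
n, sims, flagged = 100, 200, 0
for _ in range(sims):
    ctrl = [rng.gauss(120, 15) for _ in range(n)]
    trt = [rng.gauss(120, 15) for _ in range(n)]
    # Hypothetical "tampering": shift a handful of baseline values in one arm.
    for i in range(5):
        trt[i] += 10
    _, p = welch_t_p(trt, ctrl)
    if p < 0.05:
        flagged += 1

detection_rate = flagged / sims
```

With tampering this localized, the mean shift is only about 0.5 units against a standard error near 2, so the flag rate barely exceeds the nominal false-positive rate: exactly the insensitivity being argued above.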
Oh, also, there have been some interesting discussions over time about the merits of using "simple randomization" (every patient assigned with 50% probability, fully independent of past or future assignments) versus blocked randomization, and whether the efficiency gains with…
…blocked randomization are worth the potential for research staff inferring what the next treatment assignment will be based on the prior assignment(s). There are of course various safeguards you can put in place to protect against this as well…
…such as randomly permuted block sizes, or having multiple strata (which makes it harder for the staff at any one site to really follow the sequence and guess what's next at their site), but some argue that you can avoid all that by using (what we call!) simple randomization.
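A minimal sketch of the two schemes makes the predictability concern concrete. The function names and block sizes below are my own illustration, not any trial's actual implementation: simple randomization is just independent coin flips, while permuted blocks balance the arms within each block, and with a fixed block size of 2 the second assignment in every block is fully determined by the first.

```python
import random

def simple_randomization(n, rng):
    # Each assignment is an independent 50/50 coin flip:
    # nothing about past assignments predicts the next one.
    return [rng.choice("TC") for _ in range(n)]

def blocked_randomization(n, rng, block_sizes=(2, 4)):
    # Permuted blocks: each block contains equal numbers of T and C
    # in random order. Randomly varying the block size is one of the
    # safeguards mentioned above against staff guessing the sequence.
    seq = []
    while len(seq) < n:
        b = rng.choice(block_sizes)
        block = list("T" * (b // 2) + "C" * (b // 2))
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

rng = random.Random(42)
simple_seq = simple_randomization(20, rng)
blocked_seq = blocked_randomization(20, rng)

# With a fixed block size of 2, every second assignment is the
# opposite of the one before it - anyone who saw the last patient's
# arm knows the next one. This is the predictability concern.
fixed2 = blocked_randomization(20, random.Random(1), block_sizes=(2,))
predictable = all(fixed2[i] != fixed2[i + 1] for i in range(0, 20, 2))
```

The trade-off in the thread falls out directly: blocking buys balance (the fixed-size-2 sequence is exactly 10 T / 10 C), while simple randomization buys unpredictability at the cost of possible chance imbalance.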

Thread by Andrew Althouse (@ADAlthousePhD)