We are all being rightly encouraged to be #efficient in our trial design & conduct. Efficiency comes primarily through design choices … whether classic or more modern efficient designs … a few reflections below 1/7 #MethodologyMonday
A #crossover design can be highly efficient. Each person acts as their own control, removing a large element of variation and making the design more powerful. However, the outcome needs to be short term & the intervention can’t have a lasting carry-over effect 2/7 bmj.com/content/316/71…
This is particularly the case when a cluster design is also in play. A #ClusterCrossover design can substantially reduce the sample size requirements compared with a standard cluster design. A good primer on this was published by @karlahemming and colleagues 3/7 bmj.com/content/371/bm…
Adopting a #factorial or partial factorial design also gives major reductions in sample size. The interventions need to have independent mechanisms of action, but if so, evaluating them in a factorial design effectively allows 2 trials for the size of one 4/7
See: bmcmedresmethodol.biomedcentral.com/articles/10.11…
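A rough back-of-envelope sketch of the "2 trials for the size of one" point, using the standard normal-approximation sample-size formula with illustrative numbers of my own (not taken from the linked paper), and assuming no interaction between the interventions:

```python
from math import ceil

def n_per_group(delta, sd, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation group size for a two-arm comparison
    (two-sided alpha = 0.05, power = 80%)."""
    return ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

# Hypothetical standardised effect of 0.5 for both interventions
n = n_per_group(delta=0.5, sd=1.0)  # 63 per group

# Two separate parallel trials of A and B: two groups each
separate_total = 2 * (2 * n)  # 252

# 2x2 factorial: every participant is in BOTH the A-vs-no-A and the
# B-vs-no-B comparison (assuming no interaction), so one trial of
# 2 * n participants answers both questions
factorial_total = 2 * n  # 126
```

The factorial trial needs exactly half the total participants of two separate trials, because each person is "reused" in both comparisons.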
Modern #platform trials allow multiple interventions to be evaluated within a common underpinning framework. The approach also extends the factorial concept, allowing randomisation to more than one intervention where possible, maximising efficiency. 5/7 See: jamanetwork.com/journals/jama/…
A super-efficient design is #MAMS (the multi-arm, multi-stage trial design) developed by @MRCCTU. #MAMS incorporates both platform & adaptive concepts, allowing high efficiency & rapid adaptation as needed. A useful explainer on #MAMS is here 6/7: mrcctu.ucl.ac.uk/our-research/m…
Finally, you will never be totally efficient if you do not attend to your trial #retention! Retention is crucial to ensure an optimal analysis. @GilliesKatie leads a programme of research on optimising trial retention. Follow her for insights on this 7/7 cochrane.org/news/blog-rete…
A popular, but often misused, design is the #Crossover trial. But what are the key things to look out for if you are considering using it? 1/9 #MethodologyMonday #87
In a crossover trial each participant receives two (or more) treatments in a random order. The most common design is an AB/BA design (2-treatment, 2-period design), which randomises half the sample to receive treatment A first then B, and the other half B first then A. 2/9
Because each person acts as their own control, this removes a large element of variation, making the design more powerful than a standard parallel group study 3/9
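A toy calculation (my own illustration, not from the thread) of how the within-person correlation, rho, drives the crossover saving:

```python
def crossover_vs_parallel(rho):
    """Approximate ratio of total sample size for an AB/BA crossover
    versus a two-arm parallel trial of the same power.

    Parallel: the effect estimate has variance ~ 2*sigma^2 / n_per_arm,
    with total N = 2 * n_per_arm.
    Crossover: each subject contributes a within-person difference with
    variance 2*sigma^2*(1 - rho), and receives both treatments, so the
    total shrinks by a factor (1 - rho) and halves again (no second arm).
    """
    return (1 - rho) / 2

crossover_vs_parallel(0.0)  # 0.5: even with zero correlation, half the n
crossover_vs_parallel(0.7)  # ~0.15 with a strong within-person correlation
```

The stronger the within-person correlation in the outcome (plausible for stable chronic conditions), the bigger the saving; this is the "own control" advantage made quantitative.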
In clinical trials, we sometimes undertake #SensitivityAnalysis alongside the main primary analysis. But when should we use them and to what purpose? 1/8
#MethodologyMonday #80
Sensitivity analyses can assess the impact of key elements/assumptions of a trial on the result, e.g. the impact of any baseline imbalance, or of the choice of analysis approach 2/8
If the different sensitivity analyses provide similar results, one is reassured that the trial result is robust, and thus the credibility of the trial findings is increased 3/8
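A minimal sketch of the idea with entirely made-up data: re-run the analysis under a different (here, pessimistic missing-data) assumption and check the estimate holds up:

```python
# Hypothetical trial outcomes; None marks a missing value
treatment = [7.1, 6.8, None, 7.4, 6.9, None, 7.0]
control   = [6.2, None, 6.5, 6.1, 6.4, 6.0, 6.3]

def mean_diff(t, c, fill_t=None, fill_c=None):
    """Mean difference, either complete-case (drop None) or with
    missing values imputed to a fixed value."""
    t = [fill_t if x is None else x for x in t]
    c = [fill_c if x is None else x for x in c]
    t = [x for x in t if x is not None]
    c = [x for x in c if x is not None]
    return sum(t) / len(t) - sum(c) / len(c)

# Primary analysis: complete case
primary = mean_diff(treatment, control)

# Sensitivity analysis: impute missing treatment values pessimistically
# (lowest observed) and missing control values optimistically (highest)
obs_t = [x for x in treatment if x is not None]
obs_c = [x for x in control if x is not None]
pessimistic = mean_diff(treatment, control,
                        fill_t=min(obs_t), fill_c=max(obs_c))
```

If `primary` and `pessimistic` point the same way and are of broadly similar size, the finding is robust to the missing-data assumption; if they diverge, interpret with caution.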
I have spoken about “usual care” or “treatment as usual” as a control arm in trials before, but should you ever protocolise usual care or just measure it as is? 1/8
#MethodologyMonday #77
Whilst “usual care” implies a common package of care applied across sites, there is often a high degree of heterogeneity in the care provided - but many would argue that this heterogeneity increases the external validity of the trial results 2/8
However, heterogeneity in the usual care control group may affect the internal validity of the trial: it can dilute the effect size and make the trial result hard to interpret. 3/8
I have spoken before about the importance of the minimal clinically important difference (#MCID) in relation to sample sizes, but how do you decide what it should be? There was an interesting paper published this week adding to this literature 1/8 #MethodologyMonday
The MCID drives the sample size - it is the minimum clinically important difference you set your trial to detect. Set the MCID too small and the sample size will be much larger than needed; but make the MCID too big & your trial will miss clinically important effects 2/8
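The sample size depends on the MCID quadratically, which the standard normal-approximation formula makes concrete (illustrative numbers of my own, not from the paper):

```python
from math import ceil

def total_sample_size(mcid, sd, z_alpha=1.96, z_beta=0.8416):
    """Total n for a two-arm trial (two-sided alpha = 0.05, 80% power)
    to detect a mean difference of `mcid` with outcome SD `sd`."""
    per_group = ceil(2 * (sd * (z_alpha + z_beta) / mcid) ** 2)
    return 2 * per_group

total_sample_size(mcid=0.5, sd=1.0)   # 126
total_sample_size(mcid=0.25, sd=1.0)  # 504: halving the MCID ~quadruples n
```

Hence the stakes in tweet 2/8: a slightly too-small MCID roughly quadruples the cost of the trial, while a too-large one leaves it underpowered for effects patients would still value.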
While 1:1 randomisation to interventions is most common in clinical trials, sometimes #UnequalRandomisation is used. There are a number of factors that influence which randomisation ratio to use 1/9 #MethodologyMonday
One justification for unequal randomisation is when there is a substantial difference in cost of treatments. In this scenario, randomising unequally, with fewer to the very expensive arm, maximises efficiency when a trial has fixed resources 2/9 bmj.com/content/321/72…
Another is if you are undertaking an early investigation of a new treatment and need to have greater insights into its underlying safety/benefit profile. Here, increasing allocation to the new treatment will provide greater precision around these estimates 3/9
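The statistical price of moving away from 1:1 can be quantified with a standard variance argument (my sketch, not from the thread): for a fixed total N split k:1, the relative efficiency is 4k/(1+k)².

```python
def relative_efficiency(k):
    """Efficiency of a k:1 allocation relative to 1:1 at fixed total N.

    The variance of a difference in means is proportional to
    1/n1 + 1/n2, which is minimised by equal groups; a k:1 split
    inflates it by (1 + k)**2 / (4 * k).
    """
    return 4 * k / (1 + k) ** 2

relative_efficiency(2)  # ~0.89: 2:1 loses only ~11% efficiency
relative_efficiency(3)  # 0.75: 3:1 loses 25%
```

This is why 2:1 allocation is a common compromise: the extra information on the new treatment comes at a modest power cost, whereas more extreme ratios get expensive quickly.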
One phenomenon that can affect clinical trials is the #HawthorneEffect. This is when merely being involved in a trial can improve performance. 1/9 #MethodologyMonday
The #HawthorneEffect was named after a famous set of experiments at the Hawthorne Western Electric plant, Illinois in the 1920/30s. 2/9
In one experiment, lighting levels were repeatedly changed &, with each change, productivity increased … even when reverting to poorer lighting. This was attributed to workers knowing their work was being observed. Productivity returned to normal after the experiments ended 3/9