Marion Campbell
Nov 28, 2022 · 7-tweet thread
We are all being rightly encouraged to be #efficient in our trial design & conduct. Efficiency comes primarily through design choices … whether classic or more modern efficient designs … a few reflections below 1/7
#MethodologyMonday
A #crossover design can be highly efficient. Each person acts as their own control, removing a large element of variation & making the design more powerful. The outcome needs to be short term, however, & the intervention can't have a long-standing effect 2/7
bmj.com/content/316/71…
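As a rough sketch of why the crossover is so efficient, here is the standard normal-approximation sample-size arithmetic. All the numbers (effect size, SD, within-person correlation) are illustrative assumptions, not figures from the thread:

```python
from math import ceil
from statistics import NormalDist

def z(p):
    return NormalDist().inv_cdf(p)

# Illustrative assumptions (not from the thread):
alpha, power = 0.05, 0.80
sigma = 1.0   # outcome SD
delta = 0.5   # target treatment effect (in SD units here)
rho = 0.7     # within-person correlation across periods

z_total = z(1 - alpha / 2) + z(power)

# Parallel-group trial: two independent arms.
n_per_arm = ceil(2 * sigma**2 * z_total**2 / delta**2)
parallel_total = 2 * n_per_arm

# AB/BA crossover: the effect is estimated from within-person
# differences, whose variance is 2*sigma^2*(1 - rho).
var_diff = 2 * sigma**2 * (1 - rho)
crossover_total = ceil(z_total**2 * var_diff / delta**2)

print(parallel_total, crossover_total)  # 126 vs 19 subjects
```

With a within-person correlation of 0.7, the crossover needs a small fraction of the parallel-group sample — the "own control" saving in numbers.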
This is particularly the case when a cluster design is also in play. A #ClusterCrossover design can majorly reduce the sample size requirements compared with a std cluster design. A good primer on this was published by @karlahemming and colleagues 3/7
bmj.com/content/371/bm…
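A simplified sketch of the saving, using design effects. The standard cluster design inflates an individually randomised sample size by 1 + (m − 1)·ICC; for a cluster crossover, one common approximation (after Giraudeau and colleagues, on which the cluster-crossover literature builds) multiplies this by (1 − r), where r is the within-cluster correlation of period means. Cluster size, ICC and r below are illustrative assumptions:

```python
from math import ceil
from statistics import NormalDist

z_total = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.80)

# Illustrative assumptions:
delta, sigma = 0.3, 1.0   # target effect and outcome SD
m = 50                    # participants per cluster (per period)
icc = 0.05                # intra-cluster correlation
r = 0.8                   # within-cluster correlation of period means

# Individually randomised parallel trial (per arm):
n_ind = 2 * sigma**2 * z_total**2 / delta**2

# Standard (parallel) cluster design inflates this:
de_cluster = 1 + (m - 1) * icc

# A cluster crossover deflates it again, because each cluster
# acts as its own control (approximation, see lead-in):
de_crossover = de_cluster * (1 - r)

print(ceil(n_ind * de_cluster), ceil(n_ind * de_crossover))
```

With these inputs the cluster crossover needs only a fifth of the standard cluster trial's sample, exactly the (1 − r) factor.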
Adopting a #factorial or partial factorial design also gives major reductions in sample size. The interventions need to have independent mechanisms of action, but if so, evaluating them in a factorial design effectively allows 2 trials for the size of one 4/7
See: bmcmedresmethodol.biomedcentral.com/articles/10.11…
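A tiny simulation of the "2 trials for the size of one" point, under the assumption of independent additive effects (all numbers illustrative). Each participant is randomised to both factors, and each main effect is estimated from the whole sample:

```python
import random

random.seed(1)
N = 20000                  # total participants (illustrative)
true_a, true_b = 0.5, 0.3  # independent, additive effects (assumed)

rows = []
for _ in range(N):
    a = random.randint(0, 1)   # randomise to intervention A vs not
    b = random.randint(0, 1)   # independently randomise to B vs not
    y = true_a * a + true_b * b + random.gauss(0, 1)
    rows.append((a, b, y))

def mean(vals):
    return sum(vals) / len(vals)

# Each main effect uses the *whole* sample: "2 trials for the size of one".
est_a = mean([y for a, b, y in rows if a]) - mean([y for a, b, y in rows if not a])
est_b = mean([y for a, b, y in rows if b]) - mean([y for a, b, y in rows if not b])
print(round(est_a, 2), round(est_b, 2))
```

Because randomisation to B is independent of A, the B effect averages out of the A comparison (and vice versa), so both questions are answered at full power from one cohort.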
Modern #platform trials allow multiple interventions to be evaluated within a common underpinning framework. The platform concept also extends the factorial idea, allowing randomisation to more than one intervention where possible, maximising efficiency. 5/7 See:
jamanetwork.com/journals/jama/…
A super-efficient design is #MAMS (the multi-arm, multi-stage trial design) developed by @MRCCTU. #MAMS incorporates both platform & adaptive concepts, allowing high efficiency & rapid adaptation as needed. A useful explainer on #MAMS is here 6/7: mrcctu.ucl.ac.uk/our-research/m…
Finally, you will never be totally efficient if you do not attend to your trial #retention! Retention is crucial to ensure an optimal analysis. @GilliesKatie leads a programme of research on optimising trial retention. Follow her for insights on this 7/7
cochrane.org/news/blog-rete…

More from @MarionKCampbell

May 13
A popular, but often misused, design is the #Crossover trial design. But what are the key things to look out for if you are considering using it? 1/9 #MethodologyMonday #87
In a crossover trial each participant receives two (or more) treatments in a random order. The most common design is an AB/BA design (2 treatment, 2 period design) which randomises half the sample to receive treatment A first then B and the other half to B first then A. 2/9
Because each person acts as their own control, this removes a large element of variation, making the design more powerful than a standard parallel-group study 3/9
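A minimal simulation of the classic AB/BA analysis, with all numbers illustrative assumptions: each subject's within-person difference cancels their own (large) subject effect, and comparing the two sequence groups cancels the period effect, leaving the treatment effect:

```python
import random

random.seed(2)
n_seq = 2000              # subjects per sequence (illustrative)
trt, per = 0.5, 0.3       # true treatment and period effects (assumed)
sd_subject, sd_error = 2.0, 1.0   # between-subject noise dwarfs within

def outcome(subj_effect, treated, second_period):
    return subj_effect + trt * treated + per * second_period + random.gauss(0, sd_error)

d_ab, d_ba = [], []
for _ in range(n_seq):               # sequence AB: treatment in period 1
    s = random.gauss(0, sd_subject)
    d_ab.append(outcome(s, 1, 0) - outcome(s, 0, 1))
for _ in range(n_seq):               # sequence BA: treatment in period 2
    s = random.gauss(0, sd_subject)
    d_ba.append(outcome(s, 0, 0) - outcome(s, 1, 1))

# Subject effects cancel within each difference; the period effect
# cancels between sequences, leaving the treatment effect:
est = (sum(d_ab) / n_seq - sum(d_ba) / n_seq) / 2
print(round(est, 2))
```

Note that the between-subject SD (2.0) never touches the estimate's precision — that is the variance the design removes.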
Mar 25
In clinical trials, we sometimes undertake #SensitivityAnalysis alongside the main primary analysis. But when should we use them and to what purpose? 1/8
#MethodologyMonday #80
Sensitivity analyses can assess the impact of key elements/assumptions of a trial on the result, e.g. the impact of any baseline imbalance, the impact of the choice of analysis approach, etc 2/8
If the different sensitivity analyses provide similar results, one is reassured that the trial result is robust, & thus the credibility of the trial findings is increased 3/8
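One concrete sketch of the idea, on simulated data (all numbers assumed): a primary unadjusted difference in means alongside a baseline-adjusted (ANCOVA-style) sensitivity analysis using the pooled within-arm slope. If the two agree, the result looks robust to baseline imbalance and to the choice of analysis:

```python
import random

random.seed(3)
n = 2000                        # per arm (illustrative)
true_effect, slope = 0.5, 0.6   # assumed treatment effect and baseline-outcome slope

def arm(treated):
    data = []
    for _ in range(n):
        x = random.gauss(0, 1)  # baseline measurement
        y = true_effect * treated + slope * x + random.gauss(0, 1)
        data.append((x, y))
    return data

ctrl, trt_arm = arm(0), arm(1)

def mean(v): return sum(v) / len(v)

# Primary (unadjusted) analysis: simple difference in means.
unadjusted = mean([y for _, y in trt_arm]) - mean([y for _, y in ctrl])

# Sensitivity analysis: ANCOVA-style baseline adjustment with the
# pooled within-arm regression slope.
def s_xy(data):
    mx, my = mean([x for x, _ in data]), mean([y for _, y in data])
    return sum((x - mx) * (y - my) for x, y in data)

def s_xx(data):
    mx = mean([x for x, _ in data])
    return sum((x - mx) ** 2 for x, _ in data)

b = (s_xy(ctrl) + s_xy(trt_arm)) / (s_xx(ctrl) + s_xx(trt_arm))
adjusted = unadjusted - b * (mean([x for x, _ in trt_arm]) - mean([x for x, _ in ctrl]))

print(round(unadjusted, 2), round(adjusted, 2))  # similar => result looks robust
```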
Mar 4
I have spoken about “usual care” or “treatment as usual” as a control arm in trials before, but should you ever protocolise usual care or just measure it as is? 1/8
#MethodologyMonday #77
Whilst “usual care” implies a common package of care being applied across sites, there is often a high degree of heterogeneity in care provided - but many would argue that the heterogeneity will increase the external validity of the trial results 2/8
However, heterogeneity in the usual care control group may affect the internal validity of the trial. It can affect the effect size and can make the trial result hard to interpret. 3/8
Jun 5, 2023
I have spoken about the importance of minimally clinically important differences (#MCID) before in relation to sample sizes, but how do you decide what it should be? There was an interesting paper published this week adding to this literature 1/8
#MethodologyMonday
The MCID drives the sample size - it is the minimum clinically important difference you set your trial to detect. Set the MCID too small and the sample size will be much larger than needed; but make the MCID too big & your trial will miss clinically important effects 2/8
There are different methods of calculating #MCIDs - the DELTA project is a great resource in this regard. 3/8
DELTA: journalslibrary.nihr.ac.uk/hta/hta18280/#…
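To make the "MCID drives the sample size" point concrete, a normal-approximation sketch (standard two-arm comparison of means; the SD, alpha and power are illustrative assumptions). Because the MCID enters the formula squared in the denominator, halving it roughly quadruples the trial:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(mcid, sigma=1.0, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-arm comparison of means."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (sigma / mcid) ** 2 * z ** 2)

# Halving the MCID roughly quadruples the required sample size:
print(n_per_arm(0.5), n_per_arm(0.25))  # 63 vs 252 per arm
```

This is why pinning down a defensible MCID matters so much: get it wrong in either direction and the trial is either wastefully large or underpowered for effects patients would care about.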
May 29, 2023
While 1:1 randomisation to interventions is most common in clinical trials, sometimes #UnequalRandomisation is used. There are a number of factors that influence which randomisation ratio to use 1/9
#MethodologyMonday
One justification for unequal randomisation is a substantial difference in the cost of the treatments. In this scenario, randomising unequally, with fewer participants to the very expensive arm, maximises efficiency when a trial has fixed resources 2/9
bmj.com/content/321/72…
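A sketch of the cost argument with made-up numbers: for a fixed budget, the variance of the effect estimate is proportional to 1/n1 + 1/n2, and minimising it under the budget constraint recovers the classic square-root rule, n1/n2 = sqrt(c2/c1). A simple grid search confirms the closed form:

```python
from math import sqrt

# Illustrative assumptions: fixed budget and per-participant costs.
budget = 10000.0
cost_new, cost_control = 100.0, 25.0   # new treatment is 4x as expensive

# Variance of the effect estimate is proportional to 1/n1 + 1/n2.
# Grid-search the budget split:
best = None
for n_new in range(1, int(budget // cost_new)):
    n_ctrl = (budget - cost_new * n_new) / cost_control
    var = 1 / n_new + 1 / n_ctrl
    if best is None or var < best[0]:
        best = (var, n_new, n_ctrl)

_, n_new, n_ctrl = best
# The optimum matches the square-root rule: n1/n2 = sqrt(c2/c1).
print(round(n_new / n_ctrl, 2), round(sqrt(cost_control / cost_new), 2))
```

With the new arm four times as expensive, the efficient design randomises roughly 1:2 towards the cheaper control arm rather than 1:1.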
Another is if you are undertaking an early investigation of a new treatment and need to have greater insights into its underlying safety/benefit profile. Here, increasing allocation to the new treatment will provide greater precision around these estimates 3/9
Apr 24, 2023
One phenomenon that can affect clinical trials is the #HawthorneEffect. This is when merely being involved in a trial can improve performance. 1/9
#MethodologyMonday
The #HawthorneEffect was named after a famous set of experiments at the Hawthorne Western Electric plant in Illinois in the 1920s/30s. 2/9
In one experiment, lighting levels were repeatedly changed &, with each change, productivity increased … even when reverting to poorer lighting. This was attributed to workers knowing their work was being observed. Productivity returned to normal after the experiments ended 3/9
