The best of Twitter threads about #MethodologyMonday


I have spoken before about the importance of minimal clinically important differences (#MCID) in relation to sample sizes, but how do you decide what it should be? There was an interesting paper published this week adding to this literature 1/8
#MethodologyMonday
The MCID drives the sample size - it is the minimum clinically important difference you set your trial to detect. Set the MCID too small and the sample size will be much larger than needed; but make the MCID too big & your trial will miss clinically important effects 2/8
There are different methods of calculating #MCIDs - the DELTA project is a great resource in this regard. 3/8
DELTA: journalslibrary.nihr.ac.uk/hta/hta18280/#…
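To make the MCID-to-sample-size link concrete, here is a minimal sketch (mine, not from the thread) using statsmodels' power calculator for a two-arm trial with a continuous outcome; the standard deviation and candidate MCIDs are purely illustrative:

```python
# Sketch: how the chosen MCID drives sample size in a two-arm trial.
# The SD and candidate MCIDs below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

sd = 10.0                       # assumed common standard deviation
calc = TTestIndPower()

for mcid in (2.0, 3.0, 5.0):    # candidate MCIDs on the outcome scale
    d = mcid / sd               # standardised effect size (Cohen's d)
    n_per_arm = calc.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                 alternative="two-sided")
    print(f"MCID={mcid}: ~{n_per_arm:.0f} per arm")
# Halving the MCID roughly quadruples the required sample size.
```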
While 1:1 randomisation to interventions is most common in clinical trials, sometimes #UnequalRandomisation is used. There are a number of factors that influence which randomisation ratio to use 1/9
#MethodologyMonday
One justification for unequal randomisation is when there is a substantial difference in cost of treatments. In this scenario, randomising unequally with fewer to the very expensive arm maximises efficiency when a trial has set resources 2/9
bmj.com/content/321/72…
Another is if you are undertaking an early investigation of a new treatment and need to have greater insights into its underlying safety/benefit profile. Here, increasing allocation to the new treatment will provide greater precision around these estimates 3/9
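The statistical price of moving away from 1:1 follows a standard result: to keep the same power, a k:1 allocation needs the total sample size inflated by (1 + k)^2 / (4k) relative to equal allocation. A minimal sketch:

```python
# Sketch of the standard variance penalty for unequal allocation:
# a k:1 ratio needs (1 + k)^2 / (4k) times the 1:1 total N for the same power.
def inflation_factor(k: float) -> float:
    """Relative total sample size for k:1 versus 1:1 allocation."""
    return (1 + k) ** 2 / (4 * k)

for k in (1, 2, 3, 4):
    print(f"{k}:1 allocation -> total N multiplied by {inflation_factor(k):.2f}")
# 2:1 costs only ~12.5% more participants; beyond 3:1 the penalty grows quickly.
```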
One phenomenon that can affect clinical trials is the #HawthorneEffect. This is when merely being involved in a trial can improve performance. 1/9
#MethodologyMonday
The #HawthorneEffect was named after a famous set of experiments at Western Electric's Hawthorne plant in Illinois in the 1920s and 1930s. 2/9
In one experiment lighting levels were repeatedly changed and, with each change, productivity increased… even when reverting to poorer lighting. This was attributed to workers knowing their work was being observed. Productivity returned to normal after the experiments ended 3/9
This week some of my discussions have centred on #ClusterTrials. Cluster trials involve the randomisation of intact units (wards, hospitals, GP practices etc) rather than individuals. They have a number of key elements that must be accounted for 1/11
#MethodologyMonday
There are very good reasons for cluster/group randomisation eg when evaluating interventions like clinical guidelines or educational interventions which apply at practice/hospital level; or when there is potential of contamination of the intervention across trial groups 2/11
However, cluster randomisation has major implications for design & analysis, primarily because observations within a cluster are not independent (outcomes are likely to be more similar within a cluster) 3/11
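The usual way to account for this non-independence in sample size calculations is the design effect, DE = 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. A minimal sketch with illustrative numbers:

```python
# Sketch: inflating an individually randomised sample size for clustering.
# The ICCs, cluster sizes and baseline N are illustrative assumptions.
def design_effect(cluster_size: float, icc: float) -> float:
    return 1 + (cluster_size - 1) * icc

n_individual = 400  # N required under individual randomisation
for icc in (0.01, 0.05):
    for m in (10, 50):
        de = design_effect(m, icc)
        print(f"ICC={icc}, cluster size={m}: DE={de:.2f}, "
              f"inflated N ~ {n_individual * de:.0f}")
# Even a small ICC inflates N substantially once clusters are large.
```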
The first step in a clinical trial is deciding the #ResearchQuestion. Knowing which question is most important to focus on may not be clear cut. An interesting paper was recently published which developed a tool to rank the importance of research questions 1/7
#MethodologyMonday
This tool was developed for the musculoskeletal field (ANZMUSC-RQIT), but the concepts are highly likely to be transferable to other fields 2/7
journals.plos.org/plosone/articl…
The tool identified 5 domains to be ranked: 1) extent of stakeholder consensus, 2) social burden of health condition, 3) patient burden of health condition, 4) anticipated effectiveness of proposed intervention, and 5) extent to which health equity is addressed by the research 3/7
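In the spirit of such a tool, a purely hypothetical weighted-scoring sketch is below; the equal weights and 0-10 rating scales are my illustrative assumptions, not the published ANZMUSC-RQIT scoring:

```python
# Hypothetical sketch of weighted-domain ranking of research questions.
# Weights and rating scales are illustrative, NOT the published RQIT values.
DOMAINS = ["stakeholder_consensus", "social_burden", "patient_burden",
           "anticipated_effectiveness", "health_equity"]
WEIGHTS = dict.fromkeys(DOMAINS, 0.2)  # equal weights, purely for illustration

def priority_score(ratings: dict) -> float:
    """Weighted sum of 0-10 domain ratings for one research question."""
    return sum(WEIGHTS[d] * ratings[d] for d in DOMAINS)

question_a = dict(zip(DOMAINS, [8, 6, 7, 9, 5]))
question_b = dict(zip(DOMAINS, [5, 9, 8, 6, 7]))
ranking = sorted([("A", priority_score(question_a)),
                  ("B", priority_score(question_b))], key=lambda x: -x[1])
print(ranking)  # higher score = higher-priority question
```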
There was an interesting paper this week on different stakeholders' understanding of the concept of #equipoise. Equipoise is an essential concept in clinical trials but is often not well understood 1/8
#MethodologyMonday
For it to be ethical to randomise in a trial, it is important that there is uncertainty about which treatment is best 2/8
Originally uncertainty (equipoise) had to be at the level of the individual clinician but was refined to uncertainty at the professional community level by Freedman in the 1980s 3/8
nejm.org/doi/full/10.10…
Love #methodologymonday plz keep them coming! With factorial designs, there are competing schools of thought, particularly around the assumption that the treatments do not interact. My understanding is that when effect coding (-1,+1) is used, this assumption does not apply [1/?]
Effect coding allows you to interpret all main effects and interactions independently of one another, making the factorial design very efficient for building complex interventions where we may anticipate interactions.
.@collins_most discusses effect vs. dummy coding extensively in her excellent textbook: books.google.com/books/about/Op…
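A quick way to see the point about effect coding: in a balanced 2x2 factorial, the effect-coded main-effect and interaction columns are mutually orthogonal, so each can be estimated independently. A minimal sketch (mine, not from the textbook):

```python
# Sketch: effect-coded (-1/+1) columns in a balanced 2x2 are orthogonal.
import numpy as np

a = np.array([-1, -1, +1, +1])   # factor A, effect-coded
b = np.array([-1, +1, -1, +1])   # factor B, effect-coded
ab = a * b                       # interaction column

X = np.column_stack([np.ones(4), a, b, ab])  # design matrix with intercept
print(X.T @ X)  # diagonal (4 * identity): the columns are orthogonal,
                # so main effects and the interaction are estimated independently
```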
The #FactorialTrial design is one of the original efficient trial designs, yet its potential often remains underused 1/7
#MethodologyMonday
In a #FactorialTrial you can evaluate the effectiveness of more than one treatment simultaneously, with the same sample size requirements as a single trial 2/7
bmcmedresmethodol.biomedcentral.com/articles/10.11…
For a factorial trial of say 2 treatments, patients are allocated to 1 of 4 groups: Gp1 receives both treatments A and B; Gp2 receives only A; Gp3 receives only B; and Gp4 receives neither A nor B (the control) 3/7
cambridge.org/core/services/…
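The efficiency comes from analysing "at the margins": assuming no interaction, the effect of A is estimated by comparing everyone who received A (Gp1 + Gp2) with everyone who did not (Gp3 + Gp4), so each comparison uses the full sample. A simulated sketch with made-up group means:

```python
# Sketch of "analysis at the margins" in a 2x2 factorial, assuming no
# interaction. Group means, SD and group size are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100  # per group
means = {"Gp1": 12.0,   # A + B
         "Gp2": 11.0,   # A only
         "Gp3": 11.0,   # B only
         "Gp4": 10.0}   # neither (control)
groups = {g: rng.normal(mu, 3.0, n) for g, mu in means.items()}

got_a = np.concatenate([groups["Gp1"], groups["Gp2"]])
no_a = np.concatenate([groups["Gp3"], groups["Gp4"]])
print("Estimated effect of A:", got_a.mean() - no_a.mean())  # ~1.0
# The same 4n participants simultaneously answer the question for B.
```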
A clinical trial design that is often misunderstood is the #NonInferiority clinical trial design. 1/8
#MethodologyMonday
Mostly we set up trials to test if a new treatment is better than another (ie we test for superiority) but in a #NonInferiority design we wish to test if a treatment is not unacceptably worse than a comparator. 2/8
The main reason we might test for non-inferiority is when an alternative treatment is, say, much cheaper or has fewer side effects … but we would only wish to use it if the benefits of the standard treatment are not unacceptably compromised. 3/8
onlinelibrary.wiley.com/doi/full/10.10…
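Operationally, non-inferiority is usually judged against a pre-specified margin via a one-sided confidence interval. A minimal sketch, assuming a continuous outcome where higher scores are better; the margin, alpha and simulated data are illustrative:

```python
# Sketch: confidence-interval approach to non-inferiority for a mean
# difference (new - standard), higher scores better. Margin is pre-specified.
import numpy as np
from scipy import stats

def non_inferior(new, standard, margin, alpha=0.025):
    """Declare non-inferiority if the lower one-sided confidence bound
    for (new - standard) lies above -margin."""
    diff = np.mean(new) - np.mean(standard)
    se = np.sqrt(np.var(new, ddof=1) / len(new)
                 + np.var(standard, ddof=1) / len(standard))
    lower = diff - stats.norm.ppf(1 - alpha) * se
    return lower > -margin, diff, lower

rng = np.random.default_rng(1)
ok, diff, lower = non_inferior(rng.normal(9.8, 3, 300),
                               rng.normal(10.0, 3, 300), margin=1.0)
print(f"diff={diff:.2f}, lower bound={lower:.2f}, non-inferior={ok}")
```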
Having spent the last couple of weeks discussing composite & surrogate outcomes, I was reminded this week of the importance of thoughtful planning on the choice of outcomes in the first place 1/6
#MethodologyMonday
In particular I was reminded of the fundamental work of #Donabedian to conceptualise what is important to measure to assess quality of (and improvement in) health care. Although developed decades ago, it remains just as relevant today 2/6
jamanetwork.com/journals/jama/…
When we seek to assess the impact of a new intervention on care, the Donabedian model suggests there are 3 elements that may be impacted: the #structure, the #process and the #outcome of care 3/6
Last week I discussed composite endpoints and how while they can be useful, they can also be fraught with difficulty. The same descriptors could equally be applied to #SurrogateOutcomes in clinical trials 1/9
#MethodologyMonday
A #SurrogateOutcome is a substitute measure (eg blood pressure) that one might use to stand in for the real outcome of interest (eg stroke) when the real outcomes of interest may take a very long time to measure - to allow trials to be completed more quickly & efficiently 2/9
Surrogate outcomes can take many forms and may be histological, physiological, radiological etc … biomarkers that predict events 3/9
Choosing the right outcome is key to a clinical trial. Sometimes a #CompositeOutcome - an outcome that combines more than one dimension into a single measure - is felt to be most appropriate. These can be useful but can be fraught with difficulty 1/8
#MethodologyMonday
One of the primary reasons for using a composite outcome is trial efficiency - you can get more events quickly compared to the individual components thus increasing precision and efficiency in sample size calculations 2/8
jamanetwork.com/journals/jama/…
However, the validity of a composite relies on consistency of the individual components - see Montori et al 3/8

bmj.com/content/330/74…
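The efficiency argument is easy to see in a toy simulation (mine; the component rates are invented and assumed independent): the composite fires if any component does, so it accrues events faster than any single component.

```python
# Sketch: a composite endpoint has a higher event rate than its components.
# Component probabilities are invented and assumed independent here.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
death = rng.random(n) < 0.03
mi = rng.random(n) < 0.05       # myocardial infarction
stroke = rng.random(n) < 0.04
composite = death | mi | stroke  # event if ANY component occurs

for name, ev in [("death", death), ("MI", mi), ("stroke", stroke),
                 ("composite", composite)]:
    print(f"{name}: {ev.mean():.3f}")
# More events -> more statistical information -> smaller required sample.
```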
Given the complexity of delivering clinical trials, they are a fertile ground to gain from #interdisciplinary thinking. For example, the field of trial #recruitment has already gained enormously from insights from other disciplinary approaches 1/7
#MethodologyMonday
A recent paper highlighted the use of #StatedPreference methods in this space. It showed aspects of trial design can affect recruitment 2/7

bmcmedresmethodol.biomedcentral.com/articles/10.11…
#StatedPreference methods eg discrete choice experiments are more commonly used by health economists to value and quantify aspects of health care but can be used to determine preference priorities in any domain 3/7

link.springer.com/article/10.216…
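As a heavily simplified illustration of the idea (not the methods of the cited papers), a pairwise discrete choice experiment can be analysed with a logit on attribute differences; the attributes, effect sizes and data here are all simulated:

```python
# Sketch: recovering preference weights from simulated pairwise choices
# between two hypothetical trial designs. All numbers are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_choices = 500
dx = rng.normal(size=(n_choices, 2))  # attribute differences, design A - B
true_beta = np.array([-1.0, -0.5])    # assumed dislike of e.g. visits, travel
p_a = 1 / (1 + np.exp(-dx @ true_beta))
chose_a = rng.random(n_choices) < p_a

def neg_loglik(beta):
    eta = dx @ beta
    return -np.sum(chose_a * eta - np.log1p(np.exp(eta)))

fit = minimize(neg_loglik, x0=np.zeros(2))
print("estimated preference weights:", fit.x)  # should be near true_beta
```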
It’s good to start a new year getting the basics right. It’s the same with methods; important not to slip into making common errors. The recent Xmas BMJ paper which showed the most common stats/methods errors is a great place to start 1/7
#MethodologyMonday

bmj.com/content/379/bm…
The BMJ stats editors highlighted the top 12 most common stats errors they come across. They are summarised in a neat infographic 2/7
All are important, but a couple particularly resonate. One is “dichotomania” (the term coined by Stephen Senn for this), where a perfectly good continuous measure eg blood pressure or weight is arbitrarily dichotomised into two categories - good/bad, high/low etc 3/7
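A small simulation (mine; the effect size, group size and number of replications are arbitrary) shows the cost: the same data analysed with a t-test on the continuous outcome versus a chi-squared test after a median split loses a lot of power.

```python
# Sketch: power lost by dichotomising a continuous outcome at the median.
# Effect size, n and number of replications are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
reps, n, shift = 2000, 50, 0.5
hits_t = hits_chi2 = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(shift, 1.0, n)
    hits_t += stats.ttest_ind(a, b).pvalue < 0.05
    cut = np.median(np.concatenate([a, b]))          # median split
    table = [[(a > cut).sum(), (a <= cut).sum()],
             [(b > cut).sum(), (b <= cut).sum()]]
    chi2, p, dof, _ = stats.chi2_contingency(table)
    hits_chi2 += p < 0.05
print("power, continuous t-test:", hits_t / reps)
print("power, after median split:", hits_chi2 / reps)  # noticeably lower
```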
Following the paper noted this week to have just added capital “T”s to a graph to depict standard errors 😱🤯, a short note on the importance of accurate data visualisation in any research report … 1/8
#MethodologyMonday
This was the tweet & thread which highlighted T-gate. There are lots of other issues with that paper, but data visualisation is a core element 2/8
The paper had attempted to use a #DynamitePlot (sometimes known as a Plunger Plot) to display the data. Even without adding T’s there are major issues with dynamite plots and frankly most statisticians would like them consigned to history! 3/8
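For what the usual advice looks like in practice, here is a minimal matplotlib sketch (my example data, not the paper's): show the raw points with a summary line, rather than a bar topped with an error bar.

```python
# Sketch: a jittered dot plot with group means, as an alternative to a
# bar-plus-error-bar "dynamite plot". The data are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
groups = {"Control": rng.normal(10, 2, 30), "Treated": rng.normal(12, 2, 30)}

fig, ax = plt.subplots()
for i, (name, vals) in enumerate(groups.items()):
    x = np.full(vals.size, float(i)) + rng.uniform(-0.08, 0.08, vals.size)
    ax.plot(x, vals, "o", alpha=0.5)                     # raw data, jittered
    ax.hlines(vals.mean(), i - 0.2, i + 0.2, color="k")  # group mean
ax.set_xticks(range(len(groups)), labels=list(groups))
ax.set_ylabel("Outcome")
plt.show()
```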
We are all being rightly encouraged to be #efficient in our trial design & conduct. Efficiency comes primarily through design choices … whether classic or more modern efficient designs … a few reflections below 1/7
#MethodologyMonday
A #crossover design can be highly efficient. Each person acts as their own control, removing a large element of variation and making the design more powerful. However, the outcome needs to be short term & the intervention can’t have a long-lasting effect 2/7
bmj.com/content/316/71…
This is particularly the case when a cluster design is also in play. A #ClusterCrossover design can greatly reduce the sample size requirements compared with a standard cluster design. A good primer on this was published by @karlahemming and colleagues 3/7
bmj.com/content/371/bm…
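The crossover gain can be made concrete: with within-person correlation rho, a paired contrast has variance proportional to (1 - rho), so (ignoring period effects) a 2x2 crossover needs roughly (1 - rho) times the participants of one parallel-group arm. A minimal sketch with illustrative numbers:

```python
# Sketch: approximate crossover sample size versus a parallel-group trial,
# ignoring period effects and dropout. rho = within-person correlation.
def crossover_total_n(parallel_n_per_arm: int, rho: float) -> float:
    """Total participants for a 2x2 crossover with roughly the same power
    as a parallel trial with parallel_n_per_arm per arm (2x that in total)."""
    return parallel_n_per_arm * (1 - rho)

for rho in (0.3, 0.5, 0.7):
    print(f"rho={rho}: ~{crossover_total_n(200, rho):.0f} crossover "
          f"participants vs 400 in a parallel trial")
```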
