Their two conclusions:
-p1: "meta-analysis suggested that blinding was unsuccessful among participants and investigators."
-p2: "patients or assessors were unlikely to judge which treatment the patients were on."
What explains the difference? It's immediately apparent that they analyzed different trials: p1 identified 7 trials, p2 identified 9, and only 2 (!!!) trials are included in both analyses. This comes down to differences in inclusion/exclusion criteria.
A few other differences:
- p1 uses Bang's blinding index, while p2 uses the kappa statistic (toy sketch of both below 👇)
- p1 excludes double dummy trials, p2 includes them
- p2 analyzes trials from 2000-2020 only, p1 has no time restriction
- p1 includes trials with substance use comorbidity, p2 excludes them
I have strong feelings about some of these analysis decisions, but not all. Let's just agree that for every decision point listed above either option can be reasonably justified. The important bit is that these seemingly small differences lead to polar opposite conclusions.
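To make the first bullet concrete, here is a minimal sketch (Python, with made-up guess counts, nothing from either paper) of how Bang's blinding index and Cohen's kappa can be computed from the same guess data:

```python
# Toy numbers (hypothetical): guess counts per arm as
# [guessed drug, guessed placebo, don't know]
drug_arm    = [30, 10, 10]   # 50 patients randomised to drug
placebo_arm = [18, 22, 10]   # 50 patients randomised to placebo

def bangs_bi(n_correct, n_incorrect, n_dont_know):
    """Bang's blinding index for one arm: (correct - incorrect) / N.
    ~0 means random guessing (blinding preserved), ~1 complete unblinding."""
    return (n_correct - n_incorrect) / (n_correct + n_incorrect + n_dont_know)

bi_drug    = bangs_bi(drug_arm[0], drug_arm[1], drug_arm[2])           # (30-10)/50 = 0.40
bi_placebo = bangs_bi(placebo_arm[1], placebo_arm[0], placebo_arm[2])  # (22-18)/50 = 0.08

# Cohen's kappa between guessed and actual allocation, 'don't know' dropped
a, b = drug_arm[0], drug_arm[1]        # drug arm: correct / incorrect guesses
c, d = placebo_arm[0], placebo_arm[1]  # placebo arm: incorrect / correct guesses
n = a + b + c + d
p_obs = (a + d) / n                                        # observed agreement
p_exp = ((a + c) * (a + b) + (b + d) * (c + d)) / n ** 2   # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)

print(f"Bang BI (drug) {bi_drug:.2f}, Bang BI (placebo) {bi_placebo:.2f}, kappa {kappa:.2f}")
# -> Bang BI (drug) 0.40, Bang BI (placebo) 0.08, kappa 0.30
```

Bang's index is reported per arm and can flag unblinding in one arm only, while kappa summarises agreement across both arms, so the choice of metric is not innocuous.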
What's the way forward? IMO either we argue over each of these analysis decisions until we form a consensus and come up with the one and only True analysis OR
we embrace the multiplicity and focus on the robustness of results: instead of presenting the results of a single analysis with a single set of choices, we should explore all 'choice configurations', i.e. the 'space of reasonable analyses' (eg: tinyurl.com/52ftp9kk)
For example, above I gave you 4 analysis decision points, leading to 2^4=16 reasonable analyses (note, there are other factors as well, so realistically there is a much larger number of reasonable analysis options).
Why not examine all 16 (or even more models) and assess in what % of cases blinding worked (sketch below 👇)? Such a robustness analysis reduces the influence of each individual analysis decision and encourages a big-picture view of the problem.
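A minimal sketch of what I mean, in Python; the decision points are the ones listed above, but `run_meta_analysis` is just a placeholder stub and the dummy rule inside it is made up, not a real result:

```python
from itertools import product

# The 4 decision points above; each has two defensible options.
DECISIONS = {
    "blinding_metric":  ["bang_bi", "kappa"],
    "double_dummy":     ["exclude", "include"],
    "time_window":      ["2000-2020", "all_years"],
    "substance_comorb": ["include", "exclude"],
}

def run_meta_analysis(config):
    """Placeholder for the real pipeline: select trials per `config`,
    pool the blinding data, and return True if blinding looks broken.
    Here it just returns a dummy value so the sketch runs end to end."""
    return config["blinding_metric"] == "bang_bi"  # dummy rule, not a real finding

configs = [dict(zip(DECISIONS, combo)) for combo in product(*DECISIONS.values())]
assert len(configs) == 2 ** 4  # 16 reasonable analyses

broken = [run_meta_analysis(cfg) for cfg in configs]
print(f"Blinding judged broken in {sum(broken)}/{len(configs)} specifications")
```

The interesting output is not any single specification but the share of specifications under which the conclusion holds, in the spirit of a multiverse / specification-curve analysis.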
Right now we have divergent conclusions from these studies. Maybe the conclusions would converge if both had explored their respective analysis decision spaces. Maybe not, but I think it would be worth exploring.
So, are AD trials blinded? My take is that the correct guess rate in AD trials is almost always <60% (50% corresponds to a perfectly blinded trial). So even if blinding is broken according to some hypothesis test, the effect is likely to be small.
For comparison, the correct guess rate in #psychedelic #microdosing is ~70% and probably >90% in macrodose trials. Thus, the lack of functional blinding potentially has a larger influence on #psychedelic trials compared to conventional #antidepressant RCTs.
PS: I do like both of these new papers and this 🧵 is not a criticism; rather, it aims to be constructive thinking out loud about why the results are divergent and what we can learn from them.
Here is my breakdown of the new #psychedelic #microdosing paper published yesterday (tinyurl.com/mrbhk3xd) that was 🔥🔥🔥 since all these flyers were going around at the Wonderland meeting 👇🍄💊🚨
Healthy males were randomised into LSD (n=40) and placebo (n=40) groups. They received 14 doses of either 10μg LSD or placebo every 3 days for 6 weeks.
Participants were naïve to MDing, but 70% in both groups had prior psychedelic experience (last use over a year ago), so by and large this was NOT a psychedelic-naïve sample.
I am trying to figure out the best approximate dose conversion between #magicmushrooms and #LSD for a study. Based on #science, 100µg LSD ~ 3.2g dried mushroom (see calc 👇). Is this reasonable based on your personal experiences?
The number I cite above is based on two papers. The first one tinyurl.com/muarwac6 says: "we estimate that 4 g of typically available dried mushrooms (P. cubensis) delivers the approximate psychoactive equivalent to 25 mg of psilocybin"
The other is tinyurl.com/mhrv4kpr: "these results suggest that 20 mg psilocybin is equivalent to 100 μg LSD, and 30 mg psilocybin is equivalent to 150 μg LSD, a consistency that was also noted elsewhere [46]. Thus, the dose equivalence of LSD to psilocybin is ~1:200."
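Putting the two papers' numbers together, the back-of-the-envelope calculation looks like this (assuming the psilocybin content of dried P. cubensis scales linearly with weight):

```python
# From paper 1: 4 g dried mushrooms ~ 25 mg psilocybin
psilocybin_per_gram = 25 / 4          # ~6.25 mg psilocybin per g dried mushrooms

# From paper 2: 20 mg psilocybin ~ 100 ug LSD (LSD:psilocybin ~ 1:200)
psilocybin_for_100ug_lsd = 20         # mg

grams_for_100ug_lsd = psilocybin_for_100ug_lsd / psilocybin_per_gram
print(grams_for_100ug_lsd)            # 3.2 g dried mushrooms ~ 100 ug LSD
```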
Now that #psychedelic trials' lack of blinding is a hot topic again, let's remind ourselves that for most treatments blinding doesn't even work as a concept, e.g. mindfulness, psychotherapy, lifestyle interventions etc. Lack of blinding is the rule rather than the exception, a short 🧵
Blinding as a concept only works for pharmacological treatments, but there are many more interventions where 'blinding' does not even work conceptually (see 👆). Nobody is upset that blinding quality is not measured and blinding integrity is not maintained in an exercise trial.
The tension:
-psychedelics are pharmacological (let's set aside therapy for a sec), so blinding should work
-blinding obviously does not work due to strong subjective drug effects
It's an 'unblindable' pharma treatment (at least by conventional methods), which is strange at first
In the @COMPASSPathway trial the 25mg vs 1mg (=placebo) #Psilocybin difference is not only statistically significant, but also clinically significant. On the MADRS the 'minimal important difference' is ~3-6 points (tinyurl.com/5apzv8h8), and the 25mg dose meets this criterion (more context 👇)
The 'minimal important difference' sounds like a low bar to cross, but actually most #antidepressants fail to do so relative to #placebo, tinyurl.com/yut6xyhd - thank you @PloederlM for your work on this, would love to hear your take on this trial!
Lack of blinding remains an issue, but the dose-response relationship should alleviate this concern. As I argued before, #psychedelic macrodose trials will most likely always lack blinding due to obvious drug effects; it is the nature of the intervention.
The recent #psilocybin vs. alcoholism trial used an active placebo (diphenhydramine). Despite this, ~94% correctly guessed their treatment, showing that blinding didn't work. IMO this shows that active placebos likely won't solve the blinding issue of #psychedelic trials, a 🧵
Active placebos may have perceivable effects, but these won't confuse most patients, because psychedelics have very specific subjective effects. When a patient experiences drug effects, in most cases it's easy to decipher whether they are due to a psychedelic or some other drug.
Even if someone is unfamiliar with psychedelic effects going into a trial, modern ethical research standards require doctors to discuss likely effects with patients, making blind-breaking that much easier.
Despite the importance of #blinding in medical research, most trials don't assess blinding integrity, partially because there is no method to adjust trial results for blinding integrity... until now! New preprint with implications for #microdose and #psychedelic research 🧪🧵👇
First, we define activated expectancy bias (AEB), which is an uneven distribution of expectancy effects between treatment arms due to patients recognizing their treatment allocation. AEB can be viewed as residual expectancy bias not eliminated by the trial’s blinding procedure.
The main idea behind AEB is that if treatment allocation can be deduced by participants, then treatment expectancy can bias the outcomes in the same way it biases non-blinded trials, for example open-label trials.
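To illustrate the idea (my own toy simulation, not the adjustment method from the preprint): if participants who believe they got the drug receive an extra expectancy 'boost', and that belief is more common in the drug arm because the blind is broken, the observed drug-placebo difference inflates beyond the true pharmacological effect. All numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                      # participants per arm
true_effect = 2.0            # true pharmacological benefit (points on some scale)
expectancy_boost = 3.0       # extra improvement when a patient believes they got the drug

# Hypothetical probabilities of guessing "drug" in each arm (broken blind: asymmetric)
p_guess_drug = {"drug": 0.8, "placebo": 0.3}

def arm_outcomes(arm):
    guessed_drug = rng.random(n) < p_guess_drug[arm]
    pharm = true_effect if arm == "drug" else 0.0
    noise = rng.normal(0, 5, n)
    return pharm + expectancy_boost * guessed_drug + noise

observed_diff = arm_outcomes("drug").mean() - arm_outcomes("placebo").mean()
print(f"true effect: {true_effect}, observed drug-placebo difference: {observed_diff:.1f}")
# With asymmetric guessing the observed difference exceeds the true effect,
# roughly true_effect + expectancy_boost * (0.8 - 0.3) = 3.5 in this toy setup.
```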