Here is my breakdown of the new #psychedelic#microdosing paper published yesterday (tinyurl.com/mrbhk3xd) that was 🔥🔥🔥 since all these flyers were going around at the wonderland meeting👇🍄💊🚨
Healthy males were randomised into LSD (n=40) and placebo (n=40) groups. They received 14 doses of either 10μg LSD or placebo every 3 days for 6 weeks.
Participants were naïve to MDing, but 70% in both groups had prior psychedelic experience (last use over a year ago), so by and large this was NOT a psychedelic-naïve sample.
The long-term outcomes (i.e., the difference from baseline to post-treatment) are easy: "No credible evidence of longitudinal changes to traits, mood, or cognition was found", even when blinding integrity is ignored.
For context: doses similar to the cumulative 14*10=140μg of LSD used to MD here have been shown to produce persistent positive effects on anxiety in a macrodose therapy setting tinyurl.com/wh3cx8bn
The acute effects are more interesting: here, a number of outcomes were positive when blinding integrity is ignored (-angry, +creative, +energy, +happy, -irritable, +well, +connected). This is good news, but look at the effect sizes (supp table 9)!
All outcomes were measured on a 0-100 scale. The largest difference was observed on the 'happy' scale, where the difference is ... a grand total of 3 points! The average effect size is merely 2 points (supp table 9)! Standardized effect sizes are not reported, but these are very small effects!
Effect sizes matter when evaluating efficacy; we should always put them next to p-values (tinyurl.com/mrpbtjkh). Effect size particularly matters here, because the effects are also transient.
Importantly, as the paper notes, "the LSD group was partially unblinded". To address blind breaking, a subset of the data was re-examined: responses from participants who answered 'I do not know' when guessing their drug, corresponding to a 'blinding worked' subsample.
In this 'blinding worked' subsample, 5/7 acute effects are gone (all except 'well' & 'energy'), and effect sizes remain tiny (there is no frequentist analysis for this subsample; by 'no effect' I mean that the Bayesian credible interval crosses 0; supp fig 7).
🆒side note: in our blinding-focused reanalysis of the self-blinding trial (psyarxiv.com/cjfb6/), we concluded that "microdosing increases self-perceived energy beyond what is explainable by weak blinding", in sync with these results! (we did not have an item analogous to 'well')
The paper also has some analysis of the difference between what participants expected and what they actually experienced, but I found this analysis secondary and not directly relevant to efficacy.
As for adverse events, "participants who reported at least one AE in the LSD group was 85.0% and in the placebo group was 80.0%", or 337 vs 216 total adverse events (supp table 7). These proportions are fairly close, but 10% of the LSD group stopped due to adverse events.
Most classic antidepressant trials only report the dropout rate without the reason for dropping out (it could also be due to the inconvenience of adherence, lack of efficacy, etc.). Thus, I can't compare this 'dropout due to AE' rate to classic ADs - let me know if you know more here.
In summary: (1) very small but significant acute effects if blinding is ignored (2) the effects from (1) are mostly gone, and remain very small, in the truly blind subsample (3) no persistent long-term benefits even if blinding is ignored (4) 10% dropout due to adverse events
... exciting?
Can't resist my ego's urge to highlight that qualitatively we found the EXACT same results with the self-blinding microdose study (tinyurl.com/2as4n566) for ~1% of the cost of this study: very small transient effects that are mostly gone when blinding integrity is considered.
Overall, I feel these results are similarly lukewarm to those of other trials in healthy samples (e.g. tinyurl.com/yv2636rv & tinyurl.com/ym9rd29u) - the best-case scenario is a small effect. #microdose research needs to turn towards patient populations to see if there is more there.
I am trying to figure out the best approximate dose conversion between #magicmushrooms and #LSD for a study. Based on #science, 100µg LSD ~ 3.2g dried mushroom (see calc 👇). Is this reasonable based on your personal experiences?
The number I cite above is based on two papers. The first one tinyurl.com/muarwac6 says: "we estimate that 4 g of typically available dried mushrooms (P. cubensis) delivers the approximate psychoactive equivalent to 25 mg of psilocybin"
The other is tinyurl.com/mhrv4kpr: "these results suggest that 20 mg psilocybin is equivalent to 100 μg LSD, and 30 mg psilocybin is equivalent to 150 μg LSD, a consistency that was also noted elsewhere [46]. Thus, the dose equivalence of LSD to psilocybin is ~1:200."
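The conversion above can be sketched as a quick back-of-the-envelope calculation, assuming only the two cited estimates (4 g dried P. cubensis ≈ 25 mg psilocybin, and 20 mg psilocybin ≈ 100 µg LSD); the function name is just for illustration:

```python
# Estimate (1): 4 g dried mushroom ≈ 25 mg psilocybin -> 6.25 mg/g
PSILOCYBIN_MG_PER_G_DRIED = 25 / 4
# Estimate (2): 20 mg psilocybin ≈ 100 µg LSD -> 0.2 mg psilocybin per µg LSD
PSILOCYBIN_MG_PER_UG_LSD = 20 / 100

def lsd_ug_to_dried_g(lsd_ug: float) -> float:
    """Convert an LSD dose (µg) to an approximate dried-mushroom dose (g)."""
    psilocybin_mg = lsd_ug * PSILOCYBIN_MG_PER_UG_LSD
    return psilocybin_mg / PSILOCYBIN_MG_PER_G_DRIED

print(lsd_ug_to_dried_g(100))  # → 3.2
print(lsd_ug_to_dried_g(150))  # → 4.8
```

So 100 µg LSD → 20 mg psilocybin → 3.2 g dried mushroom, with all the usual caveats about potency varying a lot between mushroom batches.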
Now that #psychedelic trials' lack of blinding is a hot topic again, let's remind ourselves that for most treatments blinding doesn't even work as a concept, e.g. mindfulness, psychotherapy, lifestyle interventions, etc. Lack of blinding is the rule rather than the exception, a short 🧵
Blinding as a concept only works for pharmacological treatments, but there are many more interventions where 'blinding' does not even work conceptually (see 👆). Nobody is upset that blinding quality is not measured and blinding integrity is not maintained in an exercise trial.
The tension:
-psychedelics are pharmacological (let's set aside therapy for a sec), so blinding should work
-blinding obviously does not work due to strong subjective drug effects
It's an 'unblindable' pharma treatment (at least by conventional methods), which is strange at first.
In the @COMPASSPathway trial the 25mg vs 1mg (=placebo) #Psilocybin difference is not only statistically significant, but also clinically meaningful. On the MADRS the 'minimal important difference' is ~3-6 points (tinyurl.com/5apzv8h8), and the 25mg dose meets this criterion (more context 👇)
The 'minimal important difference' sounds like a low bar to cross, but actually most #antidepressants fail to do so relative to #placebo, tinyurl.com/yut6xyhd - thank you @PloederlM for your work on this, would love to hear your take on this trial!
Lack of blinding remains an issue, but the dose-response relationship should alleviate this concern. As I argued before, #psychedelic macrodose trials will most likely always lack blinding due to obvious drug effects; it is the nature of the intervention.
The recent #psilocybin vs. alcoholism trial used an active placebo (diphenhydramine). Despite this, ~94% correctly guessed their treatment, showing that blinding didn't work. IMO this shows that active placebos likely won't solve the blinding issue of #psychedelic trials, a 🧵
Active placebos may have perceivable effects, but these won't confuse most patients, because psychedelics have very specific subjective effects. When a patient experiences drug effects, in most cases it's easy to decipher whether they're due to a psychedelic or some other drug.
Even if someone is unfamiliar with psychedelic effects going into a trial, modern ethical research standards require doctors to discuss the likely effects with patients, making blind breaking that much easier.
Their two conclusions:
-p1: "meta-analysis suggested that blinding was unsuccessful among participants and investigators."
-p2: "patients or assessors were unlikely to judge which treatment the patients were on."
Despite the importance of #blinding in medical research, most trials don't assess blinding integrity, partially because there is no method to adjust trial results for blinding integrity... until now! New preprint with implications for #microdose and #psychedelic research 🧪🧵👇
First, we define activated expectancy bias (AEB), which is an uneven distribution of expectancy effects between treatment arms due to patients recognizing their treatment allocation. AEB can be viewed as residual expectancy bias not eliminated by the trial’s blinding procedure.
The main idea behind AEB is that if treatment allocation can be deduced by participants, then treatment expectancy can bias the outcomes in the same way it biases non-blinded trials, for example open-label trials.
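A toy simulation can make the AEB idea concrete. This is only an illustrative sketch, not the preprint's actual model: every number below (guess rates, expectancy boost, true effect) is hypothetical.

```python
import random

random.seed(0)

def simulate_arm(n, true_effect, p_believes_active, expectancy_boost):
    """Toy model: a participant's outcome = true drug effect
    + an expectancy bump if they believe they got the active drug
    + Gaussian noise. All parameter values are hypothetical."""
    total = 0.0
    for _ in range(n):
        believes_active = random.random() < p_believes_active
        total += true_effect + (expectancy_boost if believes_active else 0.0)
        total += random.gauss(0, 1)
    return total / n

TRUE_EFFECT, BOOST, N = 1.0, 2.0, 10_000

# Intact blinding: both arms believe 'active' at the same 50% rate,
# so expectancy cancels out between arms.
blinded_diff = (simulate_arm(N, TRUE_EFFECT, 0.5, BOOST)
                - simulate_arm(N, 0.0, 0.5, BOOST))

# Broken blinding: 90% of the drug arm vs 30% of the placebo arm
# believe they got the drug -> expectancy no longer cancels (AEB).
unblinded_diff = (simulate_arm(N, TRUE_EFFECT, 0.9, BOOST)
                  - simulate_arm(N, 0.0, 0.3, BOOST))

print(round(blinded_diff, 2))    # close to 1.0, the true drug effect
print(round(unblinded_diff, 2))  # close to 2.2: true effect + ~1.2 of AEB
```

The point: the between-arm difference is unbiased only when belief rates match across arms; once participants can deduce their allocation, the uneven expectancy inflates (or could deflate) the apparent treatment effect.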