How can we account for the diverse profile of subjective and therapeutic effects which psychedelics seem to induce? In a new preprint (link below), we present theoretical and empirical evidence pointing to the need to look beyond just the 5-HT2a receptor. A thread 🧵...
1/12
Classic psychedelics all have significant affinity for both the 5-HT2a *and* 5-HT1a receptors. Although 5-HT2a is responsible for the main psychedelic effects, 5-HT1a also plays a significant modulating role. We set out to computationally characterize both of these roles.
2/12
To do so, we adopt the predictive processing framework and an energy-based model in which neural responses are the result of an optimization process on an energy landscape. During inference, the energy is minimized; during learning, the prediction error is minimized.
3/12
Within this framework, many mental disorders (depression, OCD, etc.) are understood as pathologies of optimization. Overly precise and maladaptive priors manifest as local minima with steep gradients in the energy landscape, a phenomenon sometimes called canalization.
4/12
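To make canalization concrete, here is a toy 1D landscape (the widths and depths are my own illustrative choices, not values from the paper) where gradient descent gets trapped in a narrow, steep basin even though a better minimum exists elsewhere:

```python
import numpy as np

def energy(z):
    # Narrow, steep well near z = 0: an overly-precise, canalized prior.
    # Broad, deeper well at z = 3: the better, more adaptive minimum.
    return -0.8 * np.exp(-(z / 0.15) ** 2) - 1.0 * np.exp(-((z - 3.0) / 1.5) ** 2)

# Gradient descent started near the canalized basin never escapes it
z = -0.1
for _ in range(500):
    grad = (energy(z + 1e-4) - energy(z - 1e-4)) / 2e-4  # finite difference
    z -= 0.01 * grad
print(f"trapped at z = {z:.3f}; the deeper minimum sits near z = 3")
```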
We model 5-HT2a as injecting noise into the energy landscape, and 5-HT1a as smoothing it. The former results in acute overfitting during inference, while the latter results in acute underfitting. Since many psychedelics (PSI, LSD, DMT) are mixed agonists, both happen simultaneously.
5/12
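Here is a hedged sketch of what these two manipulations could look like on the same toy landscape as above; the noise scale and smoothing kernel are my own illustrative choices, not the paper's parameters:

```python
import numpy as np

zs = np.linspace(-2.0, 5.0, 701)
E = -0.8 * np.exp(-(zs / 0.15) ** 2) - 1.0 * np.exp(-((zs - 3.0) / 1.5) ** 2)
rng = np.random.default_rng(0)

# 5-HT2a-like perturbation: inject noise into the landscape, which can
# kick the optimizer out of steep, canalized minima (acute overfitting)
E_2a = E + 0.1 * rng.standard_normal(E.shape)

# 5-HT1a-like perturbation: smooth the landscape with a Gaussian kernel,
# flattening steep minima (acute underfitting / belief relaxation)
kernel = np.exp(-np.linspace(-3, 3, 61) ** 2)
kernel /= kernel.sum()
E_1a = np.convolve(E, kernel, mode="same")

# A mixed agonist applies both perturbations at once
E_mixed = E_1a + 0.1 * rng.standard_normal(E.shape)
```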
The overfitting induced by 5-HT2a is a special form of transient belief strengthening, one with the typical neural signature of increased cortical entropy. The underfitting induced by 5-HT1a is a form of acute belief relaxation, and alone would only weakly increase cortical entropy.
6/12
In our model, we find that 5-HT2a is responsible for long-term therapeutic effects, but at the cost of acute tolerability. In contrast, 5-HT1a is acutely therapeutic and tolerable, but provides little long-term efficacy. Things get interesting when you mix both.
7/12
In our model, mixed agonists have greater long-term efficacy than pure 5-HT2a agonists, while also being significantly more acutely tolerable. We find that to optimize for both long-term and acute therapeutic effects, the optimal agonism profile is biased towards 5-HT1a over 5-HT2a.
8/12
5-MeO-DMT, a highly 5-HT1a-biased agonist, has received clinical attention for its potential to treat depression. Likewise for the co-administration of MDMA and LSD. There is a whole space of 5-HT1a-biased agonists, such as 5-MeO-MIPT, which may also be worth exploring.
9/12
Our work points to the importance of non-5-HT2a receptor targets in the efficacy and tolerability of psychedelic therapy. Perhaps not surprisingly, the tryptamines have this profile, and the clinical success of psilocybin may be attributable to its unique mixed profile.
10/12
I am truly grateful to my wonderful collaborators @VeronicaChelu, @lgraesser3, and @adamsafron who worked to make this project possible. I also want to thank @algekalipso for providing consultation on the phenomenology of 5-MeO-DMT in the early formulation of this work.
11/12
The preprint contains many more details and results. I encourage folks to check it out and let us know their thoughts. Our model makes a number of untested predictions, and we hope that it can encourage valuable new lines of inquiry going forward.
12/12
Happy to share a new pre-print: arxiv.org/abs/2204.05133 from @kaixhin, @shuxnys, @kanair, and me. In it, we try to engage with the neuroscience and philosophy of conscious function, in the hope of providing a positive vision of an AI research program inspired by that work. A 🧵...
This work was a long time coming, with ideas first percolating last summer when I joined Araya. We were happy to see the lively discussion on Twitter a couple of months ago around the topic, and are glad we can finally share our perspective...
"Consciousness" is a complicated hydra of a topic. Instead of focusing on phenomenal consciousness with all of its metaphysical and ethical implications, we focus instead on conscious access and conscious function (as Daniel Dennett put it “what is consciousness for?”)...
Inspired by the impressive results of OpenAI's GLIDE arxiv.org/abs/2112.10741, I decided to dive into denoising diffusion models last week. I spent a couple days implementing the original model from Ho et al., 2020 (link below), and learned a lot in the process. 🧵 1/5
The idea behind these models is very clever, and they can be implemented straightforwardly as well. Put simply, you just train a model to predict the noise added to an image. Once trained, you iteratively use that model to go from random noise to a generated sample. 2/5
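For the curious, the core of the method fits in a few lines. Here is a hedged sketch of the Ho et al., 2020 training objective and sampling loop, where `model(x_t, t)` stands in for any noise-prediction network and images are assumed to be (B, C, H, W) tensors; my repo has more detail:

```python
import torch

# Standard DDPM noise schedule (Ho et al., 2020)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_loss(model, x0):
    # Sample a random timestep and noise, noise the image in one step,
    # then regress the model's output onto the injected noise
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    return ((model(x_t, t) - eps) ** 2).mean()

@torch.no_grad()
def sample(model, shape):
    # Start from pure noise and iteratively denoise back to a sample
    x = torch.randn(shape)
    for t in reversed(range(T)):
        beta, a_bar = betas[t], alphas_bar[t]
        eps_hat = model(x, torch.full((shape[0],), t))
        # Posterior mean: remove the predicted noise, then rescale
        x = (x - beta / (1 - a_bar).sqrt() * eps_hat) / (1 - beta).sqrt()
        if t > 0:
            x = x + beta.sqrt() * torch.randn_like(x)  # sampling noise
    return x
```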
I open-sourced my code which you can check out here: github.com/awjuliani/pyto…. It is written in PyTorch/Lightning and works with MNIST and CIFAR datasets. As long as you have a GPU, you can train the generative model for these datasets in a matter of minutes. 3/5