New: "Training deep neural density estimators to identify mechanistic models of neural dynamics" biorxiv.org/content/10.110… @ppjgoncalves @janmatthis @deismic_ Nonnenmacher @kaandocal Bassetto @chc1987 @Bill_P_ @SaraAnnHaddad @TPVogels @dvdgbg. Our biggest project so far! Thread:
Models in neuro are ideally built on our assumptions about underlying mechanisms, often using classical theories (Hodgkin-Huxley, E/I balance, ...).
The catch is: getting models and data to agree can be tricky, and people spend time tuning parameters by hand or with heuristics. (2/n)
Wouldn't it be great to have inference tools that work for these models? Then we could build the models we want, AND do statistical inference to explore the space of parameters that captures the data. This is what we did here, using shiny ML tools (simulation-based inference). (3/n)
Basic idea: Generate lots of synthetic data from model simulations with random params. Then train a neural net on this data, and if you do it right [insert: lots of technical details] it learns to do inference. We call this Sequential Neural Posterior Estimation (SNPE). (4/n)
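To make that loop concrete, here is a minimal, self-contained sketch (toy simulator, diagonal-Gaussian density estimator, all names invented for illustration; this is NOT the SNPE implementation, which lives at mackelab.org/delfi):

```python
# Illustrative simulate-then-train loop behind simulation-based inference.
import torch
import torch.nn as nn

def simulator(theta):
    # Hypothetical stand-in for a mechanistic model: noisy nonlinear map.
    return torch.sin(theta) + 0.1 * torch.randn_like(theta)

n_sims, dim = 5000, 2
theta = torch.rand(n_sims, dim) * 6.28   # 1) draw params from the prior (~[0, 2pi])
x = simulator(theta)                     # 2) simulate lots of synthetic data

# 3) Train a conditional density estimator q(theta | x): here a small net
#    predicting a diagonal Gaussian over parameters, fit by max likelihood.
net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2 * dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    mean, log_std = net(x).chunk(2, dim=-1)
    nll = -torch.distributions.Normal(mean, log_std.exp()).log_prob(theta)
    loss = nll.sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# net now maps any observation x to an approximate posterior over theta.
```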
This only requires a way to simulate data, not access to the internals of the model. So, we can also use it on messy, non-differentiable, custom-coded models, and we can use it to find models that target summary statistics ('give me all networks with correlation 0.1'). (5/n)
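Targeting a summary statistic just means wrapping the simulator so that it returns the features you care about. A sketch, with hypothetical names:

```python
import numpy as np

def summary_stats(voltage_traces):
    # Reduce raw simulator output (units x time) to the feature of interest:
    # here, the mean pairwise correlation across units.
    c = np.corrcoef(voltage_traces)
    return np.array([c[np.triu_indices_from(c, k=1)].mean()])

x_o = np.array([0.1])  # target observation: 'networks with correlation 0.1'
```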
We run this on multiple examples:
Exhibit 1: Ion-channel models (with @TPVogels et al.). The approach finds the posterior (read: the space of params that work). (6/n)
Outputs of high-posterior (i.e. 'good') models (purple) look like the input data (green); outputs from low-posterior ('bad') models (pink) do not: (7/n)
A cool, useful aside: yes, Bayesian inference is slow, as are simulations. But here we can amortize inference, i.e. we 'store' it in the neural net. On new data, we can just use the network again, which is lightning-fast. One could do full Bayesian inference in real time! (8/n)
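In the toy sketch above, amortization is literally just a forward pass through the trained net (reusing the hypothetical names from that sketch):

```python
# A 'new recording' arrives: inference is one forward pass, no new
# simulations or sampling loops needed.
x_new = simulator(torch.rand(1, dim) * 6.28)   # stands in for new data
mean, log_std = net(x_new).chunk(2, dim=-1)    # posterior in milliseconds
posterior = torch.distributions.Normal(mean, log_std.exp())
samples = posterior.sample((1000,)).squeeze(1) # instant posterior samples
```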
Exhibit 2: Hodgkin-Huxley models, both on simulated data and on recordings from @AllenInstitute. Here, we illustrate how different features of the data constrain different parameters ("If I only give you the spike count, not the timing, what can you say about the model?"). (9/n)
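One way to ask that question in code (a sketch with invented feature names): restrict the summary-statistics vector before training, and compare which parameters stay constrained.

```python
import numpy as np

FEATURES = ['spike_count', 'latency_to_first_spike',
            'resting_potential', 'ap_width']   # hypothetical feature names
stats = np.random.rand(1000, len(FEATURES))    # stand-in summary-stats matrix
keep = [FEATURES.index('spike_count')]
stats_restricted = stats[:, keep]  # train on spike count alone, then compare
                                   # the resulting posterior to the full one
```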
Exhibit 3, the grand finale: the famous stomatogastric ganglion, known to generate the pyloric rhythm from vastly different parameters. We can use SNPE to find the full, 31-dim space of parameters (not shown...) that achieves a given, experimentally observed rhythm. (10/n)
We can then use the differentiable posterior to numerically look for high-probability paths, and use these to identify trajectories in model space along which the network output is (almost) invariant, or, conversely, to search for perturbations to which it is most sensitive. (11/n)
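A sketch of what such a search can look like, continuing the toy example above (an assumed procedure for illustration, not the paper's exact algorithm): initialize a straight line between two high-posterior parameter sets, then nudge the waypoints uphill on the posterior density.

```python
theta_a, theta_b = samples[0], samples[1]      # two high-posterior params
t = torch.linspace(0, 1, 20).unsqueeze(-1)
path = (theta_a + t * (theta_b - theta_a)).clone().requires_grad_(True)
opt = torch.optim.Adam([path], lr=1e-2)
for step in range(100):
    loss = -posterior.log_prob(path).sum()     # pull waypoints to high density
    opt.zero_grad()                            # (endpoints left free for brevity)
    loss.backward()
    opt.step()
# `path` now traces a trajectory along which the output should be (almost)
# invariant; directions of fastest density decrease are the most sensitive
# perturbations.
```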
But seeing broad, weakly correlated posteriors can be misleading: in high dimensions, these are averages over many parameter settings, each of which might be finely balanced. To find whether there might be local dependencies between parameters, we look at 'conditional' posteriors. (12/n)
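Concretely (still in the toy sketch), a conditional posterior fixes all parameters except a chosen pair at a high-probability point and scans the density over that pair:

```python
theta_star = samples.mean(0)          # a high-probability reference point
grid = torch.linspace(0.0, 6.28, 100)
cond = torch.zeros(100, 100)
with torch.no_grad():
    for a, ga in enumerate(grid):
        for b, gb in enumerate(grid):
            th = theta_star.clone()
            th[0], th[1] = ga, gb     # scan this pair, all others fixed
                                      # (trivial in 2 dims, essential in 31)
            cond[a, b] = posterior.log_prob(th).sum()
# Diagonal ridges in `cond` mean the two parameters must co-vary to keep
# the output fixed -- candidate compensation mechanisms.
```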
We find strong, structured correlations in the conditional posterior, i.e. a need for fine-tuning, which can be used to predict potential compensation mechanisms. For some of these, experimental data is available and turns out to be (qualitatively) consistent with them! (13/n)
Sean Bittner + John Cunningham have been working towards a broadly similar goal biorxiv.org/content/10.110…, but with a different technical approach and different applications. Science should not be a race, so we decided to post the preprints at exactly the same time. (14/n)
Technical details, related work, and a discussion of limitations (it's a tool, not magic) are in the paper.
Code: mackelab.org/delfi; tutorials are still being brushed up.
Many thanks to our great co-authors + many others!
@TU_Muenchen @caesarbonn @NNCN_Germany @dfg_public biorxiv.org/content/10.110… END