Why spiking neural networks?

There are interesting prospects for engineering applications, but let's not forget that spiking neurons are precise models of biological neurons.

In a paper accepted at #NeurIPS2021, we use back-prop in spiking RNNs to fit cortical data. 1/8
Given that the biological network is

(1) strongly recurrent and
(2) only partially observed (some neurons are not recorded),

this is a profound statistical problem, for which the best existing formalizations are still based on GLMs fitted by maximum likelihood estimation (MLE, @jpillowtime). 2/8
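
For readers who want the baseline spelled out, here is a minimal sketch of the discrete-time Poisson-GLM negative log-likelihood that MLE minimizes, conditioned on the recorded spikes (names like `glm_nll` and the single-lag coupling are my simplifications, not the paper's code):

```python
import numpy as np

def glm_nll(weights, bias, spikes, dt=1e-3):
    """Negative log-likelihood of a discrete-time Poisson GLM (up to a constant).

    spikes  : (T, N) array of recorded spike counts per time bin.
    weights : (N, N) coupling matrix (a single time lag, for simplicity).
    bias    : (N,) baseline log-rate of each neuron.
    """
    T, N = spikes.shape
    nll = 0.0
    for t in range(1, T):
        drive = spikes[t - 1] @ weights + bias   # linear drive from past spikes
        rate = np.exp(drive)                     # exponential nonlinearity
        # Poisson log-likelihood of the recorded spikes at time t
        nll -= np.sum(spikes[t] * np.log(rate * dt) - rate * dt)
    return nll
```

Note that the past activity fed into `drive` always comes from the recording, never from the model's own samples; that is the conditioning the next tweet complains about.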
A limitation of MLE training is that it is conditioned on the recorded data only. So when one simulates the fitted network, the activity explodes as soon as it drifts away from the data.

This is why back-prop in spiking RNNs is useful: one can now train the model using simulated spikes! 3/8
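
To make that concrete, here is one common way to back-prop through the non-differentiable spike (a minimal PyTorch sketch of a surrogate gradient; the triangular pseudo-derivative is one popular choice, and the paper's exact variant may differ):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth pseudo-derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()                 # spike when v crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # replace the Dirac derivative with a triangular window around threshold
        pseudo_grad = torch.clamp(1.0 - v.abs(), min=0.0)
        return grad_output * pseudo_grad

spike = SurrogateSpike.apply  # use inside the spiking RNN's forward pass
```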
Great! But what is the correct loss to push the model towards the true biological network?

A simple thing to start with is to minimize the distance between recorded and simulated activity statistics (e.g. the firing rate or the PSTH). We call that a sample-and-measure loss function. 4/8
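
In code, one plausible reading of such a loss (the exact statistics and distance used in the paper may differ; `psth` and `sample_and_measure_loss` are my names):

```python
import torch

def psth(spikes, bin_size=10):
    """Trial-averaged rate (PSTH) from a float (trials, time, neurons) tensor."""
    n_trials, T, N = spikes.shape
    T_crop = T - T % bin_size                    # drop the incomplete last bin
    binned = spikes[:, :T_crop].reshape(n_trials, -1, bin_size, N).sum(dim=2)
    return binned.mean(dim=0)                    # average over trials

def sample_and_measure_loss(simulated, recorded, bin_size=10):
    """MSE between the PSTHs of simulated and recorded activity.

    Gradients reach the network weights through `simulated`,
    thanks to the surrogate-gradient spikes above.
    """
    return torch.mean((psth(simulated, bin_size) - psth(recorded, bin_size)) ** 2)
```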
A sample-and-measure loss is the proper way to implement the prior that "my model will generate realistic PSTHs" (or any other statistic).

We studied its statistical consistency and found the usual property of Bayesian priors: in the infinite-data limit, it does not bias the solution. 5/8
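
Roughly, in my notation (a sketch of the argument, not the paper's exact statement): with T(·) the chosen statistic and d a distance,

```latex
% \hat{T}_{\mathrm{data}}: statistic estimated from n recorded trials,
% \mathbb{E}_\theta[T]: the same statistic under the model with parameters \theta.
\mathcal{L}(\theta) = d\!\left( \hat{T}_{\mathrm{data}},\, \mathbb{E}_\theta[T] \right),
\qquad
\hat{T}_{\mathrm{data}} \xrightarrow{\; n \to \infty \;} T_{\mathrm{true}},
```

so with infinite data the loss vanishes exactly at parameters whose statistics match those of the true network, and the extra term adds no bias.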
@shuqiwang6 ran lots of simulations, and we saw that it solves the notorious stability issues of GLM + MLE.

So we see the spiking sample-and-measure + spiking back-prop combo as a simple and stable generative model of neural activity. 6/8
As a tentative application, we checked whether this could be used to reconstruct the connectivity of the recorded network.

For this, GLM + MLE is fine when the network is fully observed, but it is very unstable when some neurons are not recorded (as in any cortical recording). 7/8
But don't worry: modeling hidden activity is much simpler with sample-and-measure 🦸‍♀️!!

Although it's still difficult to recover every connection strength perfectly, sample-and-measure finds faithful connectivity patterns -- even with ~85% of the activity hidden. 8/8
Many thanks to the co-authors @shuqiwang6, @modirshanechi, W Gerstner and J Brea from @compneuro_epfl

In particular to @shuqiwang6 (co-first author), who deserves a lot of credit for this work! If she applies to your PhD program, do not miss out on her. She is truly exceptional 🔢💻🧠
Thanks also to @RomainBrette and @PierreYger, who supervised my master's thesis 8 years ago; I often hear their voices telling me that spiking is a model of biology.

Story of a complicated love-hate relationship:
biology ➜ machine learning ➜ biology ➜ machine learning ➜ bio...
