There are interesting prospects for engineering applications, but let's not forget that spiking neurons are precise models of biological neurons.
In a paper accepted at #NeurIPS2021 we use back-prop in spiking RNNs to fit cortical data 1/8
Given that the biological network is
(1) strongly recurrent and (2) only partially observed (some neurons are never recorded),
this is a hard statistical problem. The best existing formalizations are still based on GLMs fitted by maximum likelihood (MLE, @jpillowtime). 2/8
A limitation of MLE training is that it conditions on the recorded data only. So when one simulates the fitted network, activity explodes as soon as it drifts away from the data.
This is why back-prop in spiking RNNs is useful: one can now train the model using simulated spikes! 3/8
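For readers curious how gradients can flow through a non-differentiable spike at all: the usual trick (which our spiking back-prop relies on) is a surrogate gradient. The forward pass keeps the hard threshold, while the backward pass substitutes a smooth pseudo-derivative. A minimal sketch, with `beta` and the sigmoid shape being one common choice rather than the paper's exact definition:

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: hard threshold (Heaviside step, non-differentiable).
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    # Backward pass: replace the Heaviside derivative with a smooth,
    # sigmoid-shaped pseudo-derivative so gradients can flow through
    # the spiking nonlinearity during back-prop.
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.1, 2.0])
print(spike(v))           # [0. 0. 1. 1.]
print(surrogate_grad(v))  # largest near threshold, vanishing far away
```

In an autograd framework this pair would be wrapped as a custom operation (forward = `spike`, backward = `surrogate_grad`), so the whole recurrent network can be trained end-to-end on simulated spikes.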
Great! But what is the correct loss to push the model towards the true biological network?
A simple thing to start with is to minimize the distance between recorded and simulated activity statistics (e.g. firing rate or PSTH). We call that a sample-and-measure loss function 4/8
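To make the idea above concrete, here is a minimal sketch of a sample-and-measure loss on PSTHs (my own illustrative names and bin size, not the paper's code): bin each spike train, average over trials to get a PSTH, and take the squared distance between the recorded and simulated statistics.

```python
import numpy as np

def psth(spikes, bin_size):
    # spikes: (trials, time) binary array.
    # PSTH = trial-averaged spike count per time bin, normalized by bin size.
    trials, T = spikes.shape
    binned = spikes[:, : T - T % bin_size].reshape(trials, -1, bin_size).sum(axis=2)
    return binned.mean(axis=0) / bin_size

def sample_and_measure_loss(recorded, simulated, bin_size=10):
    # Squared distance between recorded and simulated PSTHs
    # (the PSTH is one choice of statistic; firing rates work too).
    return np.mean((psth(recorded, bin_size) - psth(simulated, bin_size)) ** 2)
```

The key point is that `simulated` comes from running the model freely, so the gradient of this loss (via the surrogate gradients of the spiking network) pushes the model to *generate* realistic statistics, not just to predict held-in data.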
A sample-and-measure loss is the proper way to implement the prior that "my model will generate realistic PSTHs" (or any other statistics).
We studied its statistical consistency and found the usual property of "Bayesian priors": in the infinite-data limit, it does not bias the solution. 5/8
@shuqiwang6 ran extensive simulations, and we saw that it solves the notorious stability issues of GLM + MLE.
So we see the spiking sample-and-measure + spiking back-prop combo as a simple and stable generative model of neural activity. 6/8
As a tentative application we checked if that could be useful to reconstruct the connectivity of the recorded network.
For this, GLM + MLE is ok when the network is fully observed but it's very unstable when some neurons are not recorded (like in any cortical recording). 7/8
But don't worry, modeling hidden activity is much simpler with sample-and-measure 🦸♀️ !!
Although it's still difficult to recover every connection strength perfectly, sample-and-measure finds faithful connectivity patterns -- even with ~85% of the activity hidden. 8/8
In particular to @shuqiwang6 (co-first), who deserves lots of credit for this work! If she applies to your PhD program, do not miss out on her. She is truly exceptional 🔢💻🧠
Thanks also to @RomainBrette and @PierreYger, who supervised my master's thesis 8 years ago. I often hear their voices telling me that spiking is a model of biology.
Story of a complicated love-hate relationship:
biology ➜ machine learning ➜ biology ➜ machine learning ➜ bio...