A story of a Cortical Neuron as a Deep Artificial Neural Net:

1) Neurons in the brain are bombarded with massive synaptic input distributed across a large tree-like structure - the dendritic tree.
During this bombardment, the tree goes wild

preprint: biorxiv.org/content/10.110…
2) These beautiful electricity waves are the result of many different ion channels opening and closing, and electric current flowing IN and OUT and ALONG the neuron.

This is complex - a lot of things are going on - and the question arises: how can we understand this complexity?
3) The approach we take in the paper is to compress all of this complexity into as small a deep artificial neural network as possible.

We simulate a cell with all of its complexity, and attempt to fit a DNN to the neuron's input-output transformation
4) We successfully manage to compress the full complexity of a neuron - usually described by more than 10,000 coupled, nonlinear differential equations - into a smaller, but still very large, deep network.
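In code, the fitting setup looks roughly like this - a toy numpy sketch with made-up sizes and a single hidden layer (the paper's actual network is far larger and temporally convolutional; all names and dimensions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, far smaller than in the paper:
N_SYN, T_WIN, N_UNITS = 128, 100, 8  # synapses, ms of history, hidden units

# Each first-layer unit is a spatio-temporal filter over all synapses.
W1 = rng.normal(0, 0.1, size=(N_UNITS, N_SYN, T_WIN))
b1 = np.zeros(N_UNITS)
w2 = rng.normal(0, 0.1, size=N_UNITS)  # readout to predicted somatic response

def forward(x):
    """x: spike raster of shape (N_SYN, T_WIN) -> scalar prediction."""
    h = np.maximum(0.0, np.einsum('kst,st->k', W1, x) + b1)  # ReLU hidden layer
    return float(w2 @ h)

# The DNN would be trained (e.g. by SGD on squared error) so that forward(x)
# matches the simulated neuron's output; here we only show the forward pass.
x = (rng.random((N_SYN, T_WIN)) < 0.01).astype(float)  # sparse input spikes
y_hat = forward(x)
```

The key point is that each hidden unit sees the entire spatio-temporal input pattern at once, which is what makes the first-layer weights interpretable later in the thread.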

What biological mechanism is responsible for this complexity?
5) The first candidate that comes to mind is the NMDA ion channel, present at the neuron's excitatory synapses. So we remove the NMDA ion channels and repeat the experiment, keeping only AMPA synapses

Turns out, now we only need a very small artificial net to mimic the input-output transformation
6) So it turns out that most of the processing complexity of a single neuron is the result of two specific biological mechanisms - the distributed nature of the dendritic tree coupled with the NMDA ion channel.
Take away either one of those things - and a neuron turns into a simple device
7) One additional advantage deep neural networks have over thousands of complicated differential equations is the ability to visualize their inner workings.

The simplest method is to look at the first layer weights of the neural network:
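Concretely, "looking at the first-layer weights" just means imaging each unit's (synapse x time) filter as a heatmap. A minimal sketch, assuming the weights are stored as a 2-D array per unit (the random array here is a stand-in for one trained unit's weights):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, so this runs in a script
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
N_SYN, T_WIN = 128, 100                     # toy sizes (hypothetical)
w_unit = rng.normal(size=(N_SYN, T_WIN))    # stand-in for one unit's weights

fig, ax = plt.subplots(figsize=(6, 4))
# Rows: synapses ordered along the dendritic tree; columns: ms before output.
im = ax.imshow(w_unit, aspect='auto', cmap='RdBu_r')
ax.set_xlabel('time before prediction (ms)')
ax.set_ylabel('synapse (position on dendritic tree)')
fig.colorbar(im, label='weight')
fig.savefig('first_layer_unit.png')
```

With real trained weights, structure like "basal synapses weighted only at recent times" shows up directly as colored bands in this image.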
8) Depicted here are the weights of one artificial unit in the first layer of the large DNN that mimics the neuron with its full complexity:

One can see the spatio-temporal structure of synaptic integration:
The basal and oblique trees integrate predominantly recent inputs
9) Here is a different first-layer unit. This one pays attention to what happened on the apical tree over many more milliseconds than the basal and oblique trees we saw in the previous unit

(BTW, the blue traces are inhibition, the red traces are excitation)
10) If we look at the first-layer units of the small DNN fitted to the neuron with AMPA-only synapses, it appears the units pay almost no attention to the apical tree, or to distal locations on the basal and oblique trees.
11) It is a little hard to see the details in those weight plots because there are so many synapses, so let's focus on a single dendritic branch and zoom in on it.

For a single branch with NMDA, it's possible to mimic its behavior with only 4 hidden units
12) And here are its spatio-temporal patterns of integration:
I'll verbally describe those filters from top to bottom as questions that the neuron is "asking" the input in order to determine its output

(Note: no inhibition here, only excitation; the time window spans 100 ms)
13)
unit 1: was there very recent excitation that was proximal to soma?
unit 2: was there very recent excitation that was distal to soma?
unit 3: was there a quick distal to proximal pattern of excitation?
unit 4: was there a slow distal to proximal pattern of excitation?
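A toy version of such a 4-unit branch model - the filters below are entirely hypothetical, hand-built to mirror the four "questions" above rather than taken from the fitted network:

```python
import numpy as np

N_SYN, T_WIN = 20, 100  # synapses along one branch (0 = proximal), 100 ms window

def filt(syn_range, t_range):
    """Build a binary spatio-temporal filter over (synapse, time)."""
    f = np.zeros((N_SYN, T_WIN))
    f[syn_range, t_range] = 1.0
    return f

filters = [
    filt(slice(0, 5),   slice(90, 100)),  # unit 1: recent proximal excitation
    filt(slice(15, 20), slice(90, 100)),  # unit 2: recent distal excitation
]
# Units 3 & 4: distal-to-proximal sweeps (diagonal filters), fast vs slow.
fast, slow = np.zeros((N_SYN, T_WIN)), np.zeros((N_SYN, T_WIN))
for s in range(N_SYN):
    fast[s, T_WIN - 1 - s] = 1.0       # ~1 ms per synapse
    slow[s, T_WIN - 1 - 4 * s] = 1.0   # ~4 ms per synapse
filters += [fast, slow]

def branch_output(x, threshold=3.0):
    """x: (N_SYN, T_WIN) spike raster -> unit activations and branch output."""
    h = np.array([max(0.0, (f * x).sum() - threshold) for f in filters])
    return h, h.sum()  # crude readout: any strongly matched pattern drives output

# A fast distal-to-proximal wave of input spikes maximally excites unit 3:
x = np.zeros((N_SYN, T_WIN))
for s in range(N_SYN):
    x[s, T_WIN - 1 - s] = 1.0
h, y = branch_output(x)
# h -> [2., 0., 17., 0.]: unit 3 (the fast-sweep detector) wins
```

Each unit answers its "question" with a graded yes/no, and the branch output combines those answers - which is the sense in which 4 hidden units can stand in for a branch's NMDA-driven nonlinearity.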
14) many more details are in the preprint on bioRxiv: biorxiv.org/content/10.110…
@bioRxiv @biorxiv_neursci
15) Huge thanks to my PhD supervisors @Segev_Lab and @mikilon, and also to all my lab mates @TMoldwin @Oren_Amsalem @GuyEyal @MichaelDoronII @gialdetti, and to everyone else who listened to me talk on and on about this stuff in the past! :-)