Thread by Carsen Stringer (@computingnature), 25 tweets.
Thread: A picture is worth a thousand words, and your brain needs billions of neurons to process it. Why do we need so many neurons? To find out, we recorded thousands of them in mouse visual cortex. Here’s some data, and a link to the paper:
biorxiv.org/content/early/…
2. One reason to have so many neurons may be that they each have different jobs:
Neuron A recognizes the pointedness of a fox's ears,
Neuron B recognizes the color of the fox's fur,
Neuron C recognizes a fox's nose,
and so on.
3. When enough of these neurons activate, the brain as a whole can recognize a fox.
4. What if some neurons “fall asleep” on the job and don’t respond to the image? This actually happens very often, and yet the brain is remarkably robust to these failures.
5. Even if 90% of the neurons don’t do their job, we can still recognize the fox. Even if we randomly change 90% of the pixels, we can still recognize the fox. The brain is robust to a lot of manipulations like that.
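This kind of robustness can be sketched with a toy redundant population code. This is purely an illustration (not the paper's analysis): many noisy neurons carry the same "fox" signal, and an averaging readout barely notices when 90% of them go silent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 1000 noisy neurons all carrying the same "fox" signal.
n_neurons = 1000
fox_signal = 1.0
responses = fox_signal + rng.normal(0.0, 0.5, n_neurons)

# Readout: average the whole population.
full_readout = responses.mean()

# "90% of neurons fall asleep": keep a random ~10% and read out again.
awake = rng.random(n_neurons) < 0.1
dropped_readout = responses[awake].mean()

print(round(full_readout, 2), round(dropped_readout, 2))
```

Because each surviving neuron is a noisy copy of the same signal, the readout hardly moves. The brain's code is far richer than this, but the redundancy principle is the same.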
6. Artificial neural networks also use millions of neurons to recognize images.
7. Unlike brains, machines are not so robust to small aberrations. Here is our fox and, next to it, the same fox very slightly modified: now the machine thinks it's a puffer fish!
8. These are called “adversarial images”, because we devised them to fool the machine. How does the brain protect against these perturbations and others?
9. One protection could be to make many slightly different copies of the neurons that represent foxes. Even if some neurons fall asleep on the job, their copies might still activate.
10. However, if the brain used so many neurons for every single image, we would quickly run out of neurons!
11. This results in an evolutionary pressure: it’s good to have many neurons do very different jobs so we can recognize lots of objects in images, but it’s also good if they share some responsibilities, so they can pick up the slack when necessary.
12. We found evidence for this by investigating the main dimensions of variation in the responses of 10,000 neurons. Below, each column is one neuron’s responses to several of our images.
13. The largest two dimensions were distributed broadly across all neurons, as you see below. Any neuron could contribute to these and pick up the slack if the other neurons did not respond.
14. The next 8 dimensions were each smaller and more sparsely distributed across neurons. If a neuron was asleep, a few others could likely still represent these dimensions in its place.
15. The next 30 dimensions revealed ever more intricate structure...
16. And so did the next 160 dimensions...
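These "dimensions of variation" come from a principal-component-style analysis of the stimuli-by-neurons response matrix. Here is a minimal sketch on synthetic data (the paper's actual analysis, a cross-validated variant, is in the linked code repo):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the data: 200 "neurons" responding to 500 "stimuli".
n_neurons, n_stimuli = 200, 500
responses = rng.normal(size=(n_stimuli, n_neurons))

# Each eigenvalue of the neuron-by-neuron covariance is the variance
# along one dimension of population activity.
centered = responses - responses.mean(axis=0)
cov = centered.T @ centered / n_stimuli
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sorted, largest first

print(eigvals[:5])  # the largest dimensions of variation
```

The corresponding eigenvectors say *which* neurons contribute to each dimension, which is what the broad-vs-sparse distinction above is about.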
17. And so on: this kept going, with the N-th dimension being about N times smaller than the biggest dimension.
18. This distribution of activity is called a “power-law”.
19. However, this was not just any power-law: it had a special exponent of approximately 1. We did some math and showed that a power-law with this exponent must be borderline fractal.
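A power-law spectrum means the variance of the N-th dimension falls off as N^(-alpha). Given a spectrum, the exponent can be estimated with a straight-line fit in log-log coordinates. A sketch on a synthetic spectrum (the paper fits over a chosen range of dimensions, not the whole spectrum):

```python
import numpy as np

# Synthetic spectrum obeying variance(n) ∝ 1/n (exponent alpha = 1),
# matching the thread: the N-th dimension is ~N times smaller than the first.
n = np.arange(1, 1001)
variances = 1.0 / n

# Estimate alpha as minus the slope of log(variance) vs. log(n).
slope, _ = np.polyfit(np.log(n), np.log(variances), 1)
alpha = -slope
print(alpha)  # ~1.0 for this synthetic spectrum
```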
20. A fractal is a mathematical object that has structure at many different spatial scales, like the Mandelbrot set below:
21. This Inceptionism movie is also a kind of fractal:
22. The neural activity was very close to being a fractal, and just barely avoided it, because its exponent was 1.04, not 1 or smaller.
23. An exponent of 1.04 is the sweet spot: as high-dimensional as possible without being a fractal.
24. Not being a fractal allows neural responses to be continuous and smooth, which are the minimal protections neurons need so that we don’t confuse a fox with a puffer fish!
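The math behind this, as I understand it from the paper (the symbols here are mine): if the stimuli vary along $d$ dimensions and the variance of the $n$-th neural dimension decays as a power law,

```latex
\lambda_n \propto n^{-\alpha},
```

then the representation is smooth (differentiable) only if $\alpha > 1 + 2/d$. For natural images $d$ is large, so the threshold approaches 1, and an exponent of about 1.04 sits just on the smooth side of the boundary: as high-dimensional as a smooth code can be.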
All the neural data is available here: figshare.com/articles/Recor…

And the code is here: github.com/MouseLand/stri…