Carlos E. Perez
Sep 23, 2020 · 19 tweets
Amdahl's Law in parallel computing says that converting a sequential algorithm into a parallel one yields only sub-linear speedup, because the serial fraction of the work caps the gain. So why does the massively parallel biological brain perform so many operations with so little power?
It is almost universally true that converting a sequential algorithm into a parallel one increases the total number of operations for the same problem, because of coordination overhead and redundant work. So how does a parallel brain do more with much less?
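To make the premise concrete, here is a minimal sketch of Amdahl's Law: if a fraction p of the work can be parallelized across n workers, the speedup is 1/((1 - p) + p/n), which stays sub-linear whenever p < 1.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup when a fraction p of the work is parallelizable across n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1024 workers give only ~20x speedup.
for n in (4, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
```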
One way to reduce work is to not do any work at all! The brain does what it does because it avoids work whenever it can. Call this the "lazy brain hypothesis".
Most of what the brain does is based on 'amortized inference'. In other words, it relies on behavioral shortcuts (heuristics) that it has learned through experience and billions of years of evolutionary fine-tuning.
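As a rough sketch of what amortized inference looks like computationally (the function names here are purely illustrative, not the brain's actual algorithm): an expensive inference routine is run once, and a cheap cached mapping answers every later query.

```python
import functools

def expensive_inference(stimulus: str) -> str:
    """Stand-in for slow, deliberate reasoning about a stimulus."""
    # imagine a costly search or probabilistic inference here
    return f"interpretation of {stimulus}"

@functools.lru_cache(maxsize=None)
def amortized(stimulus: str) -> str:
    """After the first encounter the answer is a cheap lookup -- the learned shortcut."""
    return expensive_inference(stimulus)

amortized("rustling in the grass")   # pays the full cost once
amortized("rustling in the grass")   # afterwards near-zero work: the cost is amortized
```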
But what is the algorithm that lets it do so little work? The lazy brain employs a lot of contextual information to make assumptions about what it is actually perceiving. In fact, it doesn't even bother to perceive; it just hallucinates what it sees.
It only fires up extra processing when it notices a discrepancy in its expectations. We call this a surprise (which, incidentally, is similar to being horrified). It's a wakeup call for our consciousness to engage in more work.
So when it is surprised, it executes more threads to quickly discover knowledge that compensates for the discrepancy. But the first thread that gets a match implies that all the other threads shut down.
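A hedged sketch of that first-match-wins pattern using Python's standard concurrency tools (the candidate hypotheses and timings are invented for illustration):

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def check_hypothesis(name: str, delay: float) -> str:
    """Stand-in for one explanatory 'thread' trying to resolve the surprise."""
    time.sleep(delay)                      # pretend to search memory for an explanation
    return f"it was the {name}"

candidates = {"shadow": 0.3, "cat": 0.1, "intruder": 0.5}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(check_hypothesis, n, d) for n, d in candidates.items()]
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    print(next(iter(done)).result())       # the first match wins ("it was the cat")
    for f in pending:                      # the rest are told to shut down
        f.cancel()                         # (only stops work that hasn't started yet)
```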
In short, the parallel process does not do all the work that a comparable sequential process does. This laziness is acceptable because the brain only needs a good-enough result, not the best result.
The massively parallel architecture of the brain allows a massive number of threads to execute, but most of them are terminated very early. Again, though, it is a mistake to think that all the threads are activated.
Only the threads related to the current context are activated. Furthermore, all threads are automatically terminated within a fixed period, yet the brain still maintains longer-range contexts because some neurons happen to work at a very slow pace.
So we can think of the brain as having parallel processes running at different speeds, all providing context to one another, and all basically not doing much most of the time.
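A toy way to picture processes running at different speeds and providing context to one another (an illustration, not a model of real neurons): two leaky integrators over the same input, one fast and one slow, where the slow one carries the longer-range context.

```python
def leaky_integrator(signal, decay):
    """Exponential moving average; a small decay gives slow, long-range context."""
    state, trace = 0.0, []
    for x in signal:
        state = (1.0 - decay) * state + decay * x
        trace.append(round(state, 3))
    return trace

events = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
fast_context = leaky_integrator(events, decay=0.9)   # tracks the current moment
slow_context = leaky_integrator(events, decay=0.1)   # remembers the longer history
```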
But what does 'not doing much' actually mean? It means that the brain operates in its default mode with as little energy as possible. Anything outside the norm requires extra energy.
What's interesting is that our attention is controlled in the same way as motor actions (via the basal ganglia). Our attention navigates or feels its way just as your fingers feel their way while examining a fabric.
The brain does the laziest possible thing with attention: it inhibits information on its way to the cortex by filtering it at the thalamus. So our cortex never perceives the information it isn't attending to; that information simply isn't there to process.
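A crude sketch of that gating idea (the channel names are invented): unattended channels are filtered out before the 'cortex' ever receives them, so there is literally nothing left to process.

```python
def thalamic_gate(inputs: dict, attended: set) -> dict:
    """Pass only the attended channels through; everything else is inhibited upstream."""
    return {channel: value for channel, value in inputs.items() if channel in attended}

senses = {"vision": "red ball", "hearing": "traffic hum", "touch": "rough fabric"}
cortex_input = thalamic_gate(senses, attended={"touch"})
# cortex_input == {'touch': 'rough fabric'} -- the rest simply isn't there to process
```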
So when we are focused in thought, we are always on a very narrow path of perception. Attention seems to counter the immersion in the world with all one's senses. To maximize our perception we dilute our focus. You can't attend to everything; at best, you attend to nothing!
To engage wholly in this world, you avoid inhibiting your senses, and that implies not attending to anything. It is the same state as being at play.
However, when we are performing reason (i.e. System 2), our attention focuses on our thoughts. We are generating reasons for our actions. We don't reason and then act; that's not the lazy way. We act, then we reason if we have to.
As we explore this lazy brain hypothesis, we begin to realize that our cognitive behavior works in a way that is the opposite of how we commonly think of it.
