A good explanation of computational intractability and the notion of relevance realization: experiencemachines.wordpress.com/2019/11/13/fiv…
The above post has a follow-up by @IrisVanRooij: irisvanrooijcogsci.com/2020/01/01/sam…
To summarize: biological brains do not approximate intractable problems. There is no such thing as approximating an intractable problem; that is a fiction that many machine learning researchers fall for.
I'm not the only one who points out the limits of approximation with respect to computational systems.
What can be approximated are the computations that are tractable. We appear to handle intractable problems because we constrain ourselves to subsets of them that do have tractable solutions. Our intuition is a bias that identifies these subsets.
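To make the "tractable subset" point concrete: general Boolean satisfiability is NP-complete, yet restricting clauses to Horn form yields a polynomial-time decision procedure. Here is a minimal sketch of that restricted solver (the `(body, head)` encoding of clauses is my own choice for illustration, not anything from the thread):

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula in polynomial time.

    Each clause is given as (body, head), encoding the implication
    body_1 & ... & body_n -> head, where head is a variable name or
    None (None encodes an all-negative "goal" clause, i.e. -> False).
    General SAT is NP-complete; this Horn subset is tractable.
    """
    true_vars = set()
    changed = True
    while changed:  # unit propagation to a fixed point
        changed = False
        for body, head in clauses:
            if body <= true_vars:      # all premises already derived
                if head is None:
                    return None        # derived a contradiction
                if head not in true_vars:
                    true_vars.add(head)
                    changed = True
    return true_vars                   # a satisfying set of true facts
```

For example, `horn_sat([(set(), "a"), ({"a"}, "b")])` derives `{"a", "b"}`, while adding the goal clause `({"a"}, None)` makes the formula unsatisfiable. The point stands: the "approximation" never touches the intractable general problem, only a constrained subset.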

Thread by Carlos E. Perez.
More from @IntuitMachine

18 Oct
"The brain is a computer" is a damn problematic metaphor. I prefer to say that "the brain is an intuition machine".
The term computer is conventionally understood to mean a digital computer. It's the kind that we program. It's the kind that is designed by minds and manufactured on assembly lines. It's the kind that can't repair itself. It's the kind without any autonomy.
It is a horrible metaphor. "The brain is an intuition machine" is a better metaphor. It's the kind that learns from experience. It is the kind that develops in an inside-out manner. It is the kind that creates itself. It is the kind that repairs itself. It is autonomous.
16 Oct
"Shut up and calculate" is the affliction we have when we substitute symbols for understanding.
Humans are linguistic bodies. A huge part of our brains has been exapted (the verb form of exaptation) for language. It is thus conceivable that our innate capacities for understanding have fallen into disuse.
Simon DeDeo wrote an insightful tweetstorm about explanations that appear intuitive.
16 Oct
Biological cognition is always based on computation subject to time and resource constraints (i.e., fast and frugal). It is precisely in this context that parallel computation cannot be recreated by sequential computation.
Said differently, smaller brains do not have the same capabilities as larger brains. So when we talk about the notion of cognition or agency for insects or even single-cell life (without brains), we speak of a kind of behavior that is grossly simpler.
However, the computation performed by living things is very different from the computation performed by computers. This is because agency leads to an intentional stance. Intention is what constrains computations towards goals.
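The "fast and frugal" framing can be illustrated with a Gigerenzer-style take-the-best heuristic: instead of weighing all the evidence, consult cues in order of validity and stop at the first one that discriminates. A toy sketch (the city data and cue functions are hypothetical, invented for illustration):

```python
def take_the_best(cues, a, b):
    """Fast-and-frugal choice: check cues from most to least valid
    and decide on the first one that discriminates between the two
    options, ignoring all remaining evidence. Resource-bounded by
    design -- no exhaustive weighing of cues."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:                 # first discriminating cue wins
            return a if va > vb else b
    return None                      # no cue discriminates

# Hypothetical example: which of two cities is larger?
cities = {
    "A": {"capital": 1, "has_airport": 1},
    "B": {"capital": 0, "has_airport": 1},
}
cues = [
    lambda c: cities[c]["capital"],      # most valid cue first
    lambda c: cities[c]["has_airport"],  # less valid cue second
]
```

Here `take_the_best(cues, "A", "B")` picks `"A"` on the capital cue alone and never even consults the airport cue — a computation shaped by its time and resource budget rather than by optimality.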
15 Oct
So I got myself the Quest 2. I'm not a heavy VR user. Last time I used VR was when the Vive was released. So I skipped one generation of VR headsets. This is the 3rd gen.
The original Vive had 1080×1200 pixels per eye. The Quest 2 has 1832×1920 per eye, so the viewing experience is obviously superior. Adoption of VR will largely depend on resolution; seeing the world in low-res is a turn-off.
The 1st gen VR required some setup and was tethered to a computer. Quest 2 in contrast is untethered, the compute is on the headset itself. It's certainly much more convenient to use now that we are spoiled by our mobile devices.
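The per-eye numbers work out to roughly a 2.7× jump in pixel count, which is why the difference is so visible:

```python
vive_px = 1080 * 1200     # original HTC Vive, pixels per eye
quest2_px = 1832 * 1920   # Quest 2, pixels per eye

print(vive_px)                        # 1296000
print(quest2_px)                      # 3517440
print(round(quest2_px / vive_px, 2))  # 2.71
```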
15 Oct
All that is significant emerges from the coupling of slow with fast processes.
The slow side implies a compression or memory process relative to the fast side. That said, there is always change, but the relative difference in speed of the two processes leads to an invariance. It is that emergent invariance that is significant.
The dual of invariance is tension. There are processes that lead to convergence (thus remaining the same) and there are features that are constantly driven towards diversity. All evolution requires push-pull dynamics.
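One toy illustration of slow/fast coupling (my own sketch, not from the thread): let a slow variable track a fast, noisy one via an exponential moving average. The slow variable is a compressed memory of the fast signal's history, and the value it settles near is the emergent invariant — it arises from the speed difference, not from either process alone.

```python
import random

def slow_fast(steps=10_000, alpha=0.01, seed=0):
    """Couple a fast noisy process to a slow memory process.

    The slow variable is an exponential moving average of the fast
    one: it changes too, but far more slowly, and it converges near
    the fast signal's mean. The invariance is relative -- a product
    of the difference in timescales (alpha << 1)."""
    rng = random.Random(seed)
    slow = 0.0
    for _ in range(steps):
        fast = rng.gauss(5.0, 2.0)       # fast, fluctuating signal
        slow += alpha * (fast - slow)    # slow compression / memory
    return slow
```

With the parameters above, `slow_fast()` lands close to the fast signal's mean of 5.0 even though the fast process never stops fluctuating.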
14 Oct
Is there a limit to what Deep Learning can eventually do? matthewtancik.com/nerf
Deep Learning used to explore the cosmos:
Deep Learning used for simulations at the quantum level: