Science is being eaten up by deep learning, a fact that nobody can ignore.
But what's unfortunate is that nobody understands deep learning well enough to set up the experiments and interpretations correctly. It's damn good at making predictions but damn terrible at explaining anything!
The uncertainty principle of deep learning is that the more generalized one's network, the less likely it is to be interpretable. medium.com/intuitionmachi…
That's because this is the nature of models that curve fit their data. Anyone who doesn't understand this has their head in the sand. medium.com/intuitionmachi…
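The curve-fitting point can be made concrete with a toy sketch (the data, models, and values here are all invented for illustration, not anything from the thread): a simple line recovers the law that generated the data, so its coefficients *are* the explanation, while a more flexible polynomial fits the sample at least as well yet its coefficients explain nothing about the underlying process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=x.shape)  # true law: y = 2x + 1

# Interpretable model: a line. Its two coefficients state the law directly.
slope, intercept = np.polyfit(x, y, 1)

# Flexible curve-fitter: a degree-5 polynomial. Being a superset of the
# linear model, it fits the training sample at least as well, but its six
# coefficients say nothing readable about the generating process.
flexible = np.polyfit(x, y, 5)

lin_err = np.mean((np.polyval([slope, intercept], x) - y) ** 2)
flex_err = np.mean((np.polyval(flexible, x) - y) ** 2)

print(slope, intercept)        # close to the generating values 2.0 and 1.0
print(flex_err <= lin_err)     # True: the blind fitter fits, yet explains less
```

The flexible fit "wins" on the data while losing on explanation, which is the tradeoff the thread is pointing at.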
Models that are interpretable are causal models. But causal models are descriptive models, not generative models. The origin of these models arises through abduction and not induction. medium.com/intuitionmachi…
The difference between a map and its territory is that a map changes when its territory changes, not the other way around. There are certain territories that are so complex that we have trouble creating a map.
But machines do not need maps. It's like Amazon warehouse bots that don't need to place items in orderly categories. They have limitless memory and don't need the mental aids (i.e., categorization) that humans need.
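The warehouse idea can be sketched as a data structure (a hedged illustration, not Amazon's actual system): with cheap memory, items go into whatever bin has room, and a flat index remembers where everything is. No category hierarchy is ever consulted to retrieve anything.

```python
import random

random.seed(0)  # reproducible bin choices for the example

bins = {b: [] for b in range(8)}  # 8 storage bins, contents uncategorized
index = {}                        # flat map: item -> bin id

def stow(item):
    """Put the item in a random bin; only the index knows where it went."""
    b = random.choice(list(bins))
    bins[b].append(item)
    index[item] = b

def pick(item):
    """Retrieve via one O(1) index lookup; no taxonomy is needed."""
    b = index.pop(item)
    bins[b].remove(item)
    return b

for thing in ["toothpaste", "novel", "hdmi cable"]:
    stow(thing)

print(pick("novel") in range(8))  # True: found without any categorization
```

The human shelving scheme (group like with like) is a memory aid; the machine replaces it with an index, which is the point about not needing maps.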
There are two kinds of explanations. The intuitive kind gives us the feeling of understanding; an example is the analogy that an atom is like a solar system, where an electron 'orbits' around a nucleus.
Then there are explanations with reach. These are the kinds based on first principles that allow one to derive new consequences without ever requiring a real-world simulation. The conservation laws of physics are like this: they aren't intuitive, but they are useful explanations.
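One way to see what "reach" means: a conservation law hands you a consequence by algebra alone, with no step-by-step simulation of forces. A minimal worked example (the masses and speed are assumed values chosen for illustration), using conservation of momentum to predict a rifle's recoil:

```python
# Total momentum before firing is zero, so afterwards:
#   m_gun * v_gun + m_bullet * v_bullet = 0
# The recoil speed follows from first principles, no dynamics simulated.

m_gun = 4.0       # kg  (assumed example value)
m_bullet = 0.01   # kg  (assumed example value)
v_bullet = 800.0  # m/s (assumed example value)

v_gun = -m_bullet * v_bullet / m_gun
print(v_gun)  # -2.0: the gun recoils at 2 m/s, derived, not simulated
```

Nothing about springs, gases, or barrel friction was modeled, yet the prediction holds for any mechanism, which is exactly the reach the intuitive solar-system analogy lacks.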
There are meta-patterns throughout nature that recur at different levels of complexity. Curve fitting algorithms are blind to these patterns. This is because these meta-patterns are baked into the architecture of the algorithm. It's not introspectable but a matter of habit.
The habits that are useful recur at different levels of complexity. Biology continues to reuse the same algorithms at multiple levels of complexity. The stuff that single cells do to avoid predators is an example of an algorithm that's reinvented in multicellular creatures.
Algorithms are rediscovered all the time at different layers. The reason why higher levels can't just reuse lower-level algorithms is because they are composed of different stuff and interact differently.
The fallacy that most people have in their mind is that biological systems are composed of simpler stuff. So we are constantly surprised that the more we peer into single cells and neurons, the less we understand.
Physics had the same surprise decades ago. The more they probed into the smallest of particles, the more complex their mathematics became. Most don't understand quantum mechanics, but it gets more complex when you investigate the Standard Model, supersymmetry, or string theory.
The same thing is happening in biology. The more we try to dissect the parts of a single cell, the more we realize how little we actually understand. The complexity is enormous for something that can recreate an entire organism.
Our reductionist bias assumes that stuff can be broken up into simpler parts. But this agenda has hit a brick wall in biology and neuroscience. The simple parts are even more complex than we had previously imagined.
So we have complex parts that interact with other complex parts leading to emergent behavior that is also complex. The assumption of simple parts leading to complexity addresses only a narrow kind of complexity.
Not only do we have complex parts leading to emergent complex behavior; we have these systems folding back into themselves and generating even greater complexity. To add insult to injury, there's no master designer! These systems design themselves without anything resembling a mind.

Thread by Carlos E. Perez (@IntuitMachine)
