Excellent paper from Google discussing the robustness of Deep Learning models when deployed in real domains. arxiv.org/abs/2011.03395
The issue is described as 'underspecification'. The analogy they make is to a system of linear equations with more unknowns than equations: the excess freedom leads to differing behavior across networks trained on the same dataset.
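To make the analogy concrete, here is a minimal sketch (my own illustration, not code from the paper): an underdetermined linear system admits many exact solutions, just as many trained networks can fit the same training data equally well while behaving differently elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 equations, 5 unknowns: A @ x = b is underdetermined.
A = rng.normal(size=(3, 5))
x_true = rng.normal(size=5)
b = A @ x_true

# Two different exact solutions: the minimum-norm solution, and that same
# solution shifted by an arbitrary vector from the null space of A.
x_min_norm = np.linalg.lstsq(A, b, rcond=None)[0]
null_basis = np.linalg.svd(A)[2][3:].T              # basis of A's null space
x_other = x_min_norm + null_basis @ np.array([1.0, -2.0])

print(np.allclose(A @ x_min_norm, b))        # True: fits the "training data"
print(np.allclose(A @ x_other, b))           # True: fits it just as well
print(np.linalg.norm(x_min_norm - x_other))  # yet the two solutions differ
```

Each direction in the null space plays the role of an arbitrary training choice (random seed, initialization, data ordering): invisible on the training data, decisive away from it.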
This is one of the rare papers that has practical significance to the production deployment of deep learning. I've alluded to this problem previously with respect to physical simulations. medium.com/intuitionmachi…
I wrote, "The main argument against DL models is that they don’t represent any physics, although they seem to generate simulations that do look realistically like physics."
Conventional computational models are constrained to reflect the actual physics. Proper DL methods therefore also have to be constrained similarly in their dynamics.
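One hedged way to read "constrained in their dynamics" is to add a physics penalty to the training objective, in the spirit of physics-informed networks. The PyTorch sketch below is my own illustration, not the paper's method; `physics_residual`, `lam`, and the toy conservation law are placeholders.

```python
import torch

def physics_residual(x, y_pred):
    # Toy constraint: predictions should conserve the total of the inputs
    # (a stand-in for a real PDE or conservation-law residual).
    return y_pred.sum(dim=-1) - x.sum(dim=-1)

def constrained_loss(model, x, y_true, lam=1.0):
    y_pred = model(x)
    data_loss = torch.nn.functional.mse_loss(y_pred, y_true)   # fit the data
    physics_loss = physics_residual(x, y_pred).pow(2).mean()   # respect the constraint
    return data_loss + lam * physics_loss

# Usage with a toy model and random data.
model = torch.nn.Linear(4, 4)
x, y = torch.randn(32, 4), torch.randn(32, 4)
loss = constrained_loss(model, x, y)
loss.backward()
```

The penalty removes part of the excess freedom: among the many models that fit the data, it prefers the ones whose dynamics also satisfy the known constraint.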
Although we have seen some impressive progress in this area, it is difficult to do well in domains like NLP and EHR (electronic health records).
This is because our models of these domains are also underspecified: we do not know which constraints would pin our models down. So on edge cases that are absent from our training set, we cannot anticipate the models' emergent behavior.
This problem, however, is not unique to DL models; it also exists in conventional computational models. That is why ensembles of models are routinely used to predict weather patterns. The Church-Turing thesis is simply unavoidable for complex systems.
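A toy illustration of the ensemble point (my own sketch, not the paper's experiments): members that agree on the training data can disagree sharply on inputs outside it, and that spread is a usable signal of the edge-case blindness described above.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=40)
y_train = np.sin(3 * x_train) + 0.05 * rng.normal(size=40)

# "Ensemble": the same model class fit under different arbitrary choices,
# stood in for here by degree-9 polynomial fits on bootstrap resamples.
ensemble = []
for seed in range(10):
    idx = np.random.default_rng(seed).integers(0, 40, size=40)
    ensemble.append(np.polyfit(x_train[idx], y_train[idx], deg=9))

preds_in = np.array([np.polyval(c, 0.0) for c in ensemble])   # inside training range
preds_out = np.array([np.polyval(c, 2.5) for c in ensemble])  # an unseen edge case

print("spread in-distribution:    ", preds_in.std())
print("spread out-of-distribution:", preds_out.std())  # far larger disagreement
```

Weather forecasting makes the same move: run many plausible models and treat their divergence as a measure of uncertainty.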
The path forward for robust AI will always depend on good explanatory models that capture the relevant causality of the system being predicted. Advanced AI should not be driven naively by curve fitting, but rather by relevance realization.
That is, true intelligence is a multi-scale phenomenon, and the issue of judgement is critical to its effective deployment.
Wrote this up here: medium.com/intuitionmachi…

• • •

More from @IntuitMachine

16 Nov
What is the difference between causation and causality? The former is a consequence of a generative model and the latter is a consequence of a descriptive model.
Causation is the emergent partial ordering induced by computation. In theory, all the characteristics of computation, such as universality and the halting problem, are inherited by the concept of causation.
Causality, however, is a different thing. It is a process that approximates the causal behavior of complex processes; it approximates in the sense that it describes the process. This is different from simulating the process, which has its own intrinsic limitations (Church-Turing).
15 Nov
Analysis of QAnon by a game designer. Everyone should read! medium.com/curiouserinsti…
QAnon's method is like the movie Inception on a mass scale: planting seeds of misinformation so that its victims generate their own understanding of an alternative reality.
The author concludes that this isn't a movement that grew organically, but rather one that is orchestrated with big money.
14 Nov
Jay McClelland on What's missing in Deep Learning crowdcast.io/e/learningsalo…
He argues that systematic generalization in humans is not innate but something that we acquire.
Thus he argues that to achieve systematic generalization we need to devise machines that learn how to do systematic generalization. That is, a meta-solution to the problem.
14 Nov
Yesterday's Learning Salon with Gary Marcus. The last 30 minutes were excellent (after the guest left). The best conclusion: @blamlab AI is the subversive idea that cognitive psychology can be formalized.
crowdcast.io/e/learningsalo…
It is important to realize that a description of a missing cognitive functionality does not have enough precision, or leave enough hints, about how it is implemented in the brain. Implementations in code do not imply how it is implemented in the brain.
Another important distinction is that there is disagreement about how to do research. The Deep Learning community has argued that we should not constrain ourselves with a priori hypotheses that may be wrong; let the learning system discover the algorithms.
13 Nov
The more you try to understand cognition, the more you realize how long the journey to human-like general intelligence may be.
Our frameworks for understanding cognition are getting better. However, one has to understand that cognition arises through emergence in complex adaptive systems. These systems are very difficult to set up and replicate.
To get an intuition of how large the gap truly is, one only needs to observe how awkward and non-organic our present-day robots are. Why can't they perform with the nimbleness of honey bees?
13 Nov
The idealization of the ethnic peasantry as the one true national class is the generating condition that led to genocides in Nazi Germany, Armenia, and Cambodia. It is fueled by resentment of the elite, seen as the root of one's own misery.
We need to learn from history and ask why a country like Cambodia would put a quarter of its population to death only because they were experts in different crafts. en.wikipedia.org/wiki/Cambodian…
What collectively drives people to kill people on a mass scale? What makes people ignore their natural empathy for others? It is the collective delusion that the existence of another is the reason for one's misery.
