Wonderful discussion with Paul Cisek at the Learning Salon. Paul proposes a refactoring of our taxonomy for understanding cognition. He argues that the structure should be driven by studying the history of evolution. crowdcast.io/e/learningsalo…
What I love about the Learning Salon, hosted by @criticalneuro, @neuro_data, and John Krakauer, is that the hosts are always ready to tear apart the arguments of the speaker. Krakauer has an uncanny ability to conjure up strong cases against the speaker.
I subscribe to Cisek's thesis that, to understand cognition, we should be informed by evolution. Cognition is a consequence of the history (or the baggage) that led us to our present state. Studying this information can lead to explanations of the peculiarities of human thinking.
Krakauer's strong argument against this is that it 'runs out of steam'. That is, it cannot explain the more complex cognitive mechanisms. I suspect the argument relates to 'virtualization', in that cognition is decoupled from its substrate.
Functionalists will argue that they can discover the mechanisms of cognition independent of the unique path of evolution. This is related to convergent evolution where different paths lead to the same function. What is the issue with this stance?
No study is more anthropocentric than the study of the mind. Ironically, the mind is predominantly unconscious and below the surface of conscious introspection. Yet we have developed taxonomies that are based on arm-chair explorations.
Every level of biology presents a different context, with problems that are solved in many ways. Our brains are a consequence of that billion-year history of solving problems, the majority of which we are unaware of. The study of the evolution of brains gives us insight into these problems.
In short, it informs us of the curriculum that evolution followed. However, the argument against this is that it does not reveal to us how it was done but rather only identifies the steps in the process.
But we already know from advances in deep learning that understanding the why and how may be impenetrable. We predominantly work at the boundaries, trimming just the branches and allowing the learning algorithms to grow the solution.
The problem, though, with learning algorithms is (1) Goodhart's Law and (2) deceptive objectives. The first implies that learning algorithms will seek the easiest solution to a problem and not the most general one. The second implies that seeking immediate objectives closes one off from better solutions.
The evolutionary record reveals to us the stepping stone problems that were solved prior to achieving more advanced capabilities. It reveals the curriculum of biology. Evolution makes progress by solving immediate objectives and not long-range ones.
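As a toy illustration of (2), deceptive objectives, and of the stepping-stone point above, here is a minimal sketch in Python (the world, the names, and the numbers are all made up for illustration, not taken from the discussion): a greedy hill-climber that optimizes the final objective directly stalls on a deceptive landscape, while the same hill-climber given a curriculum of immediate, stepping-stone objectives reaches the goal.

```python
# A toy 1-D world with a deceptive objective (illustrative names only):
# the goal sits behind a locked door, and the key lies in the opposite
# direction, so moving straight toward the goal is deceptive.
GOAL, DOOR, KEY = 10, 5, -5

def step(pos, has_key, move):
    """Apply a move of -1 or +1; the door blocks passage without the key."""
    new_pos = pos + move
    if new_pos >= DOOR and not has_key:
        new_pos = pos                      # bounced off the locked door
    if new_pos <= KEY:
        has_key = True                     # picked up the key
    return new_pos, has_key

def hill_climb(objective, steps=200):
    """Greedy search: take the move that most improves the objective,
    stopping at a local optimum."""
    pos, has_key = 0, False
    for _ in range(steps):
        best = max((step(pos, has_key, m) for m in (-1, +1)),
                   key=lambda s: objective(*s))
        if objective(*best) <= objective(pos, has_key):
            break                          # no move improves the objective
        pos, has_key = best
    return pos

# (1) Optimize the final objective directly: distance to the goal.
#     The search never "wastes" progress by walking away to fetch the key,
#     so it stalls just short of the locked door: the objective is deceptive.
print(hill_climb(lambda pos, has_key: -abs(pos - GOAL)))   # -> 4

# (2) Optimize a stepping-stone curriculum: reward the immediate objective
#     (reach the key) before the long-range one, the way evolution solves
#     immediate problems first.
def curriculum(pos, has_key):
    bonus = 100 if has_key else 0
    target = GOAL if has_key else KEY
    return bonus - abs(pos - target)

print(hill_climb(curriculum))                               # -> 10
```

The point of the sketch is only that solving the immediate objective first (the key) is what makes the long-range objective (the goal) reachable at all.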
One may argue that studying vacuum tubes does not lead us to understand transistors. But we should recognize that the invention process of evolution differs from that of humans. Biology is a differentiation process rather than an additive one.
In other words, the connection with the past is informative as to how something works in the future. Think of understanding the processes in human bureaucracies. The warped logic can only be understood through the lens of how it evolved.

More from @IntuitMachine

8 Oct
Damn, this book is thick!! @coecke
I've observed that most books have very few diagrams. I really don't understand why authors think that it's easier to explain something without a diagram.
Perhaps there is a lack of ability to express something in a diagram. This book has an unimaginable number of diagrams. I randomly opened the book and there were 7 diagrams across two pages.
6 Oct
Brian Cantwell Smith's lecture on philosophy and the meaning of computation explains why the language of philosophy just uses a different vocabulary from that of computer science.
In this lecture, he argues that 4 common definitions of computation are inadequate: (1) symbol processing, (2) Turing equivalence, (3) information processing, and (4) digital.
His more abstract definition is that computation is the interplay of meaning and mechanism. It is the mechanization of an agent's intentionality.
6 Oct
Found this post by Steve Byrnes that I think is worth reading: lesswrong.com/posts/diruo47z…
7 guiding principles of the brain: (1) two subsystems, (2) cortical uniformity, (3) blank-slate neocortex, (4) a neocortical algorithm, (5) the subcortex steers the neocortex, (6) the neocortex is a black box with respect to the subcortex, (7) unknown subcortex algorithms.
To summarize, Steve Byrnes argues that the subcortex is underexplored, more complex than the neocortex and is critical to AI safety. I don't disagree.
6 Oct
Is Philosophy just Psychology?
Here is George Lakoff explaining how they examined the work of philosophers and realized that each one took a subset of metaphors and took them literally.
But let us take this even further: metaphor is a tool for human brains. But what are brains other than computational systems? Here Brian Cantwell Smith explains the meaning of computation:
5 Oct
Civilizations and governments exist to improve the welfare of everyone. Yet we have a civilization and a government that focus on the few. This is obvious when we see spending for all the wrong reasons. ebaumsworld.com/videos/carl-sa…
Civilizations and bureaucracies have always been gamed by the cleverness of humans to gain individual advantages. The biggest deception is that this self-dealing is inevitable and those more cunning deserve to be at the top.
So rather than physical violence, we have instead social and political violence. We seem to separate them and are manipulated to think that the latter kind of violence is acceptable. Coercion over consensus is simply unacceptable.
29 Sep
The classic explanation of Deep Learning networks is that each layer creates a different representation that is translated by the layer above it: a discrete translation from one continuous representation to another.
The learning mechanism begins from the top layers and propagates errors downward, in the process modifying the parts of the translation that have the greatest effect on the error.
Unlike an engineered system, the modularity of each layer is not defined up front but rather learned, in a way that one would otherwise label as haphazard. If bees could design translation systems, they would do it the way deep learning networks do.
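A minimal NumPy sketch of this picture (all sizes, names, and the toy target are chosen purely for illustration): each layer produces a new continuous representation of the one below it, and learning propagates the error from the top layer downward, adjusting each translation in proportion to its effect on that error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))                     # a batch of inputs
y = np.sin(x.sum(axis=1, keepdims=True))         # a toy nonlinear target

W1 = rng.normal(scale=0.1, size=(8, 16))         # translation: input -> hidden
W2 = rng.normal(scale=0.1, size=(16, 1))         # translation: hidden -> output
lr = 0.05

for _ in range(1000):
    # Forward pass: each layer produces a new continuous representation.
    h = np.tanh(x @ W1)                          # hidden representation
    out = h @ W2                                 # top-level representation
    err = out - y                                # error is measured at the top

    # Backward pass: the error flows downward through each translation...
    grad_W2 = h.T @ err / len(x)
    err_h = (err @ W2.T) * (1 - h ** 2)          # error re-expressed at the hidden layer
    grad_W1 = x.T @ err_h / len(x)

    # ...and each translation is adjusted in proportion to its
    # contribution to the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("mean squared error:", float((err ** 2).mean()))
```

Nothing defines which layer is responsible for which part of the mapping; the division of labor between W1 and W2 simply falls out of the error signal, which is the "haphazard" modularity described above.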
