7 guiding principles of the brain: (1) two subsystems (2) cortical uniformity (3) blank-slate neocortex (4) a single neocortical algorithm (5) the subcortex steers the neocortex (6) the neocortex is a black box with respect to the subcortex (7) unknown subcortical algorithms.
To summarize, Steve Byrnes argues that the subcortex is underexplored, more complex than the neocortex, and critical to AI safety. I don't disagree.
What constitutes the neocortex and subcortex is still an open question, as is what is hard-wired versus learned. I agree, though, that the learning algorithms are innate and that the neocortex is 'rewired' to different degrees in humans and other animals.
It makes sense that one part of the brain is a black box to another part that drives its behavior. I would take this idea to smaller granularities, where many sub-parts learn how to coordinate with other parts.
Brian Cantwell Smith's lecture on the philosophy and meaning of computation explains why the language of philosophy is just a different vocabulary from that of computer science.
In this lecture, he argues that 4 common definitions of computation are inadequate: (1) Symbol processing (2) Turing equivalence (3) Information Processing and (4) Digital.
His more abstract definition is that computation is the interplay of meaning and mechanism. It is the mechanization of an agent's intentionality.
Here is George Lakoff explaining how they examined the work of philosophers and realized that each one chose a subset of metaphors and treated them literally.
But let us take this even further: metaphor is a tool for human brains. And what are brains other than computational systems? Here Brian Cantwell Smith explains the meaning of computation:
Civilizations and governments exist to improve the welfare of everyone. Yet we have a civilization and a government that focuses on the few. This is obvious when we see spending for all the wrong reasons. ebaumsworld.com/videos/carl-sa…
Civilizations and bureaucracies have always been gamed by the cleverness of humans to gain individual advantages. The biggest deception is that this self-dealing is inevitable and those more cunning deserve to be at the top.
So rather than physical violence, we have instead social and political violence. We seem to separate them and are manipulated to think that the latter kind of violence is acceptable. Coercion over consensus is simply unacceptable.
The classic explanation of Deep Learning networks is that each layer creates a different representation that is translated by the layer above it. A discrete translation from one continuous representation to another.
The learning mechanism begins at the top layers and propagates errors downward, in the process modifying the parts of the translation that have the greatest effect on the error.
Unlike a system that is engineered, the modularity of each layer is not defined but rather learned, in a way that one would otherwise label as haphazard. If bees could design translation systems, they would do it like deep learning networks.
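The translation-and-error-propagation picture above can be sketched with a toy two-layer network. This is a minimal illustration of the idea, not any particular architecture; the layer sizes, learning rate, and target function are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sum of the inputs (an arbitrary target).
X = rng.normal(size=(64, 4))
y = X.sum(axis=1, keepdims=True)

W1 = rng.normal(scale=0.5, size=(4, 8))  # lower layer's "translation"
W2 = rng.normal(scale=0.5, size=(8, 1))  # upper layer's "translation"
lr = 0.05

initial_mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

for step in range(500):
    # Forward: each layer re-represents the data for the layer above it,
    # a discrete translation between continuous representations.
    h = np.tanh(X @ W1)      # continuous representation at layer 1
    pred = h @ W2            # representation at the top layer
    err = pred - y

    # Backward: error starts at the top and flows downward, adjusting
    # whichever parts of each "translation" most affect the error.
    grad_W2 = h.T @ err / len(X)
    grad_h = err @ W2.T * (1 - h**2)  # error translated down one layer
    grad_W1 = X.T @ grad_h / len(X)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

final_mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
print(f"mse: {initial_mse:.4f} -> {final_mse:.4f}")
```

Note that nothing tells the lower layer what its representation should mean; its "modularity" emerges only from error flowing down from above, which is the haphazard-looking learned structure the paragraph describes.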
I confess I don't understand philosophy. I don't understand the language, nor do I understand the train of thinking. I suspect that my comfort in understanding how the mind works relates to my inability to understand philosophy!
I have an intuition for Wittgenstein, but I can't follow most philosophers' arguments. It seems that they are following mental scripts that I have not studied. Different philosophers have different mental scripts, and it seems the task is to stitch these scripts together.
The validity of a script is based on the stature of the philosopher. So it's kind of like a franchise of comic books with different narratives and the task is to come up with a universe story where everything fits.