Existing models of neurons, or even of single cells, are woefully inadequate for simulating what's going on in the brain. The standard models are toy models chosen because they are convenient to simulate. Scientific research has a bias toward the tools it already has at its disposal.
However, we should not underestimate the complexity that simple components can generate. Conversely, we cannot ignore the consistency of behavior that a collection of complex parts generates.
The truth about a general intelligence like the brain lies somewhere in between. Humans are complex beings, yet there is consistency in how collections of humans behave. Civilization would not be possible without the common behavior that gives rise to emergent behavior.
So when we analyze a single neuron or cell, we should extract the common behavior shared by collections of neurons and see how that leads to predictable ensemble behavior.
A big problem is that too many people fixate on where to draw the line as to what counts as the basic building block. Humans have been doing this for millennia, ever since Democritus conceived of the atom.
More important are the higher-level abstractions that describe how parts interact to create a whole. Deep Learning research has shown that the backpropagation mechanism is universally useful as long as we can compute the derivative of each component in the network.
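To make that concrete, here is a minimal sketch of the chain rule at work. It is my own illustration rather than anyone's reference implementation, and the three components are arbitrary examples; the only requirement is that each one can report its local derivative.

```python
import math

# Minimal sketch: backpropagation only needs each component to expose its
# own local derivative. The components below are arbitrary examples.
components = [
    (lambda x: 3.0 * x,      lambda x: 3.0),                      # linear scaling
    (lambda x: math.tanh(x), lambda x: 1.0 - math.tanh(x) ** 2),  # nonlinearity
    (lambda x: x ** 2,       lambda x: 2.0 * x),                  # squaring
]

def forward_and_backward(x, components):
    """Run the chain forward, then accumulate the gradient backward."""
    inputs = []                       # remember the input each component saw
    for fwd, _ in components:
        inputs.append(x)
        x = fwd(x)
    output = x

    grad = 1.0                        # d(output)/d(output)
    for (_, deriv), seen in zip(reversed(components), reversed(inputs)):
        grad *= deriv(seen)           # chain rule, applied in reverse order
    return output, grad

print(forward_and_backward(0.5, components))
```

Swap in any other differentiable component and the same backward pass still works; that is the sense in which the mechanism is universal.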
Researchers have indeed discovered network components that are more effective than others. But it is also true that we keep finding simpler components that perform just as well as more complex ones.
The development of computers has shown that there are simple universal components (NAND, NOR) that serve as the building blocks of all kinds of computational devices. C.S. Peirce was perhaps one of the first to recognize these universal components.
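As an illustration of that universality (a sketch, not a hardware description), the other basic gates can be composed from NAND alone:

```python
# NAND as the sole primitive; every other gate is built by composition.
def nand(a, b):
    return 1 - (a & b)

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))

def xor(a, b):   # classic four-NAND construction
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Truth table for XOR, built purely out of NAND gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```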
The problem the von Neumann architecture solved was how to realize human-designed execution plans in a system of NAND gates without having to rewire the entire machine for each new task.
The von Neumann architecture consists of a memory that holds a program expressed in a finite instruction set and a processor that interprets those instructions. Conditional evaluation, branching, operations on memory, and a program counter were all that was needed to achieve universality.
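A minimal sketch of that idea follows; the four-instruction set is hypothetical and of my own choosing, but it shows how a program counter, conditional branching, and memory operations let a single machine run arbitrary programs without any rewiring.

```python
# Toy stored-program machine: the program sits in memory-like storage and a
# processor with a program counter interprets it one instruction at a time.
def run(program, memory):
    pc = 0                                   # program counter
    while True:
        op, *args = program[pc]
        if op == "LOAD":                     # memory[dst] = constant
            memory[args[0]] = args[1]
        elif op == "ADD":                    # memory[dst] += memory[src]
            memory[args[0]] += memory[args[1]]
        elif op == "SUB":                    # memory[dst] -= memory[src]
            memory[args[0]] -= memory[args[1]]
        elif op == "JZ":                     # jump to target if memory[addr] == 0
            if memory[args[0]] == 0:
                pc = args[1]
                continue
        elif op == "HALT":
            return memory
        pc += 1

# Example program: multiply memory[0] by memory[1] via repeated addition.
program = [
    ("LOAD", 2, 0),   # accumulator (memory[2]) = 0
    ("LOAD", 3, 1),   # constant one, used as the decrement
    ("JZ",   1, 6),   # while counter (memory[1]) is not zero...
    ("ADD",  2, 0),   #   accumulator += memory[0]
    ("SUB",  1, 3),   #   counter -= 1
    ("JZ",   4, 2),   # unconditional jump back (memory[4] is always zero)
    ("HALT",),
]

print(run(program, {0: 6, 1: 7, 4: 0}))      # memory[2] ends up as 42
```

A different program is just different contents of memory; the gates themselves never change.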
Thus was born Computer Science, which built up higher and higher abstractions so that humans could create ever more complex programs out of these ensembles of NAND gates.
But what do we make of the brain? At what level of abstraction does it construct itself? It takes a very smart human to invent an Arithmetic Logic Unit (ALU) that sits in a CPU. How do brains bootstrap themselves?
There are never-ending debates about the atomic parts of brains. Yet everyone beats around the bush, unable to contemplate an explanation of how the brain bootstraps itself. Absent this explanation, we may never discover general intelligence unless...
We invent a tool that also bootstraps itself!
