Existing models of neurons, or even of single cells, are woefully inadequate to simulate what's going on in the brain. The standard models are toy models chosen because they are conveniently easy to simulate. Scientific research has a bias toward the tools it has at its disposal.
However, we also should not underestimate the complexity that simple components can generate. Conversely, we can't ignore the consistency of behavior that a collection of complex parts generates.
The truth about general intelligence like the brain lies somewhere in between. Humans are complex beings, yet there is a consistency in how collections of humans behave. Civilization would not be possible without the common behaviors that give rise to emergence.
So when we analyze a single neuron or cell, we should extract the common behavior shared by collections of neurons and see how that leads to predictable ensemble behavior.
A big problem is that too many people fixate on where to draw the line as to what counts as the basic building block. Humans have been doing this for centuries, ever since Democritus conceived of the atom.
What is more important are the higher-level abstractions that describe how parts interact to create a whole. Deep Learning research has shown that the backpropagation mechanism is universally useful as long as we can compute the derivative of each component in the network.
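The universality claim can be sketched in a few lines: as long as every component supplies a local derivative, the chain rule composes them into a gradient for the whole. This is an illustrative toy, not any framework's actual API; the `backprop` helper and the sample chain are my own invention.

```python
# Toy reverse-mode differentiation over a chain of components.
# Each component is a pair (f, df): the function and its local derivative.
# Hypothetical sketch -- not the API of any real deep-learning library.

def backprop(chain, x):
    """Run a forward pass, then accumulate gradients backward (chain rule)."""
    # Forward pass: record every intermediate value.
    values = [x]
    for f, _ in chain:
        values.append(f(values[-1]))
    # Backward pass: multiply local derivatives in reverse order.
    grad = 1.0
    for (_, df), v in zip(reversed(chain), reversed(values[:-1])):
        grad *= df(v)
    return values[-1], grad

# Compose "square" then "scale by 3": y = 3 * x^2, so dy/dx = 6x.
chain = [
    (lambda x: x * x, lambda x: 2 * x),  # square, local derivative 2x
    (lambda x: 3 * x, lambda x: 3.0),    # scale, local derivative 3
]
y, dy = backprop(chain, 2.0)  # y = 12.0, dy/dx = 12.0 at x = 2
```

The point of the sketch: `backprop` never inspects what the components are, only that each one can report its own derivative.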
Indeed, researchers have discovered new network components that are more effective than others. But it is also the case that we keep finding simpler components that perform just as well as more complex ones.
The development of computers has shown that there are simple universal components (NAND, NOR) that serve as the building block of all kinds of computational devices. C.S. Peirce was perhaps one of the first to recognize these universal components.
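The universality of NAND can be demonstrated directly: every standard Boolean gate falls out of it. A minimal sketch (the helper names are mine):

```python
# NAND as the universal building block: NOT, AND, OR, XOR derived from it.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)           # NAND of a signal with itself inverts it

def and_(a, b):
    return not_(nand(a, b))     # invert NAND to recover AND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: OR = NAND of the negations

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))  # classic four-NAND XOR

# Truth-table check that XOR built purely from NAND behaves correctly.
table = [xor(a, b) for a in (0, 1) for b in (0, 1)]  # [0, 1, 1, 0]
```

The same construction works for NOR; either gate alone suffices to express any Boolean function.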
The problem that the Von Neumann architecture solved was how to realize human-designed execution plans in a system of NAND gates without having to rewire the entire system.
The Von Neumann architecture consists of a memory that holds a finite set of instructions and a processor that interprets those instructions. Conditional evaluation, branching, and operations on memory and a program counter were all that were needed to achieve universality.
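The fetch-decode-execute cycle described above fits in a few lines. This is a hypothetical toy instruction set of my own devising (and for clarity the program sits in its own list rather than in the data memory, a simplification of the real stored-program idea):

```python
# Toy fetch-decode-execute loop: a program counter, memory operations,
# and one conditional branch ("jnz") -- the ingredients the thread lists.

def run(program, mem):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]           # fetch and decode
        if op == "set":                   # mem[dst] = constant
            mem[args[0]] = args[1]
        elif op == "add":                 # mem[dst] += mem[src]
            mem[args[0]] += mem[args[1]]
        elif op == "dec":                 # mem[dst] -= 1
            mem[args[0]] -= 1
        elif op == "jnz":                 # jump to args[1] if mem[args[0]] != 0
            if mem[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return mem

# Multiply 3 * 4 by repeated addition: loop "add" three times.
mem = run([("set", "acc", 0), ("set", "n", 3),
           ("add", "acc", "m"), ("dec", "n"),
           ("jnz", "n", 2)], {"m": 4})   # mem["acc"] ends at 12
```

Nothing here is specific to multiplication; with enough memory, this handful of operations can express any computation.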
Thus was born Computer Science, which built up higher and higher abstractions so humans could create more complex programs using these ensembles of NAND gates.
But what do we make of the brain? At what level of abstraction does it construct itself? It takes a very smart human to invent an Arithmetic Logic Unit (ALU) that sits in a CPU. How do brains bootstrap themselves?
There are never-ending debates as to the atomic parts of brains. Yet everyone beats around the bush, unable to contemplate an explanation of how the brain bootstraps itself. Absent this explanation, we may never discover general intelligence unless...
The "Reward is Enough" paper offers a piss-poor explanation as to how AGI is achieved. I'm actually surprised that @DeepMind wrote such a poorly constructed philosophical paper. sciencedirect.com/science/articl…
The major flaw of the 'Reward is Enough' paper is that the authors don't know how to disentangle the self-referential nature of the problem. So it's written in a way that sounds like a bunch of poorly hidden tautological statements.
Damn, had to slow down from 2x to 1.5x to even understand @coecke
Haha, I agree, I also don't understand the difference between strong and weak emergence! And I'm writing a book with emergence in its title! gum.co/empathy
I'm a bit perplexed about the achievement of Cyc. Have symbolic systems ever achieved a level of robustness that makes them of real utility? Not sure why this panel is all praise for it.
Yanic asked a good question: you can't just throw around buzzwords like 'abstractions' and 'semantics' without proposing an approach for how to achieve them. It's not clear how you get from symbolic manipulation to common sense.
Just as quantum mechanics is unintuitive to humans, it is likely that parallel distributed computation is also unintuitive. NAND gates are not intuitive. SK combinators are not intuitive. The building blocks of cognition are likely unintuitive as well.
Human minds are simply incapable of explaining how human minds work. At best we can explain the emergent properties, but not the underlying mechanisms.
Of course, we must have good metaphors to partially explain human cognition. We need them so that we can formulate explanations for methods of teaching, decision-making, and idea generation. We cannot be blind to human cognitive nature.
It must be difficult being a neuroscientist. It's like being an alchemist before the periodic table was discovered.
Just like the alchemists of Newton's time, neuroscientists lack the tools and models needed to explore their domain. You cannot make progress without the capability to observe and interpret what's going on.
To be fair, neuroscience isn't about understanding cognition. It's about understanding the physical nature of the brain. Cognition is a virtual thing: the difference between hardware and software.