Stumbled upon this new videocast with @GaryMarcus @luislamb @ykilcher. I must say the introduction was quite good.
I'm a bit perplexed about the achievements of Cyc. Have symbolic systems ever achieved a level of robustness sufficient to be of real utility? Not sure why this panel is so full of praise for it.
Yannic asked a good question: you can't just throw around buzzwords like 'abstraction' and 'semantics' without proposing an approach for achieving them. It's not clear how you get from symbolic manipulation to common sense.
The host (Keith Duggar?) makes a good point that language models are continuous on the inside and discrete on the outside.
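To make that point concrete, here is a minimal toy sketch of my own (not from the videocast, and not any real model): a made-up bag-of-embeddings "language model" in which the token ids going in and coming out are discrete, while everything in between is continuous vector arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 10, 4

# Discrete token ids map into continuous vectors...
embedding = rng.normal(size=(vocab_size, d_model))
# ...all internal computation stays continuous...
W = rng.normal(size=(d_model, d_model))
# ...and only at the very end do we collapse back to a discrete token.
unembedding = rng.normal(size=(d_model, vocab_size))

tokens = [3, 7, 1]                   # discrete on the outside (input)
h = embedding[tokens].mean(axis=0)   # continuous on the inside
h = np.tanh(W @ h)                   # still continuous
logits = h @ unembedding             # continuous scores over the vocabulary
next_token = int(np.argmax(logits))  # discrete again at the boundary (output)
print(tokens, "->", next_token)
```

The only discrete steps are the embedding lookup at the input and the argmax at the output; whatever "interpretation" happens lives entirely in the continuous middle.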
I guess the root of the current hostility between symbolists and connectionists is the competition for grant money. It's a general problem in many fields whose funding structures incentivize maximalist approaches.
Research funding isn't very kind to interdisciplinary or pluralistic approaches.
I also don't know what to make of the argument that all of computer science is symbolic and therefore it must be valuable for general intelligence. Conventional computer science is extremely valuable, but general intelligence is something different.
A massive amount of effort has been spent to craft semantic knowledge bases. Yet they remain extremely brittle. Search engines have fared much better with respect to utility. Strictness is a nice thing to have, but it's inflexible in real-world applications.
The world is really messy, and navigating it requires the same kind of general intelligence you find in autonomous systems. None of today's AI is anywhere near as competent as a honey bee.
Does symbolic reasoning or semantic knowledge help in achieving autonomous systems? I doubt it does.
I honestly don't know where these guests have been in the past year! Have they not played around with GPT-3? Yes, language is discrete, but its interpretation, and thus understanding, is continuous.
Proof systems already provide mathematical proofs that are beyond the capabilities of humans. But these systems don't move the needle in developing general intelligence.
Well, at least they are looking at transformers and graph networks, which incidentally are both deep learning approaches!

More from @IntuitMachine

13 Jun
The "Reward is Enough" paper offers a piss-poor explanation as to how AGI is achieved. I'm actually more surprised that the @DeepMind wrote such a poorly constructed philosophical paper. sciencedirect.com/science/articl…
Kenneth Stanley @kenneth0stanley actually has a much better construction, which he presents in this paper. dl.acm.org/doi/abs/10.114…
The major flaw of the 'Reward is Enough' paper is that the authors don't know how to disentangle the self-referential nature of the problem. So it's written in a way that sounds like a bunch of poorly hidden tautological statements.
12 Jun
Interview with Walid Saba, who appears to have a core ontology of just 2,000 types.
This is his layering. What you will note is that the middle layers are 'affordances' of the nouns in their arguments.
He also has a set of primitive types; @markburgess_osl has a similar set.
12 Jun
Tim Scarfe @ecsquendor with an interview of the madman @coecke
Damn, had to slow down from 2x to 1.5x to even understand @coecke
Haha, I agree, I also don't understand the difference between strong and weak emergence! And I'm writing a book with emergence in its title! gum.co/empathy
11 Jun
Just as quantum mechanics is unintuitive to humans, parallel distributed computation is likely unintuitive as well. NAND gates are not intuitive; SK combinators are not intuitive (a tiny sketch of both follows below). The building blocks of cognition are likely unintuitive too.
Human minds are simply incapable of explaining how human minds work. At best we can explain the emergent properties, but not the underlying mechanisms.
Of course, we must have good metaphors to partially explain human cognition. We need them so that we can formulate explanations for teaching, decision-making, and idea generation. We cannot be blind to human cognitive nature.
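Here is the sketch referred to above, my own illustration rather than anything from the thread: SK combinators and NAND gates are each universal in their domain, yet neither reads as remotely intuitive, which is the point about building blocks.

```python
# SK combinators: all computable functions can be expressed with just S and K,
# yet the encodings are opaque. For example, S K K behaves as the identity.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
identity = S(K)(K)
assert identity("any value") == "any value"

# NAND: a single gate from which all Boolean logic can be built.
nand = lambda a, b: not (a and b)
not_ = lambda a: nand(a, a)
and_ = lambda a, b: not_(nand(a, b))
or_  = lambda a, b: nand(not_(a), not_(b))
assert and_(True, False) is False
assert or_(True, False) is True
```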
11 Jun
It must be difficult being a neuroscientist. It's like being an alchemist before the periodic table was discovered.
Just like alchemists at the time of Newton, neuroscientists lack the tools and models needed to explore their domain. You cannot make progress if you have no capability of observing and interpreting what's going on.
To be fair, neuroscience isn't about understanding cognition. It's about understanding the physical nature of the brain. Cognition is a virtual thing. The difference between hardware and software.
11 Jun
Existing models of neurons, or even of single cells, are woefully inadequate for simulating what's going on in the brain. Standard models are toy models that are conveniently easy to simulate. Scientific research has a bias toward the tools it has at its disposal.
However, we also should not underestimate the complexity that simple components can generate. Conversely, we can't ignore the consistency of behavior that a collection of complex parts generates.
The truth about general intelligence, like that of the brain, lies somewhere in between. Humans are complex beings, yet there is consistency in how collections of humans behave. Civilization would not be possible were it not for common behaviors that lead to emergent behavior.
