David Chapman, 9 tweets, 4 min read
The founding text of the 1980s version of “neural” network nonsense was titled Parallel Distributed Processing. Its important central idea was forgotten because people latched onto the easy-to-understand error backpropagation algorithm instead.

amazon.com/Parallel-Distr…
We have only the vaguest idea of what neurons do. (Almost certainly not backprop!)

What we do know is that, whatever it is, they do it ludicrously slowly. Also, there are an awful lot of them.

Taking this seriously lets you dismiss most theories of cognition, and of human being.
Neural networks *can’t* be deep, because the signal propagation delay through a neuron is ~10-20ms. We can do some “high level” tasks like object recognition in <150ms. That means signal paths through networks of actual neurons can’t be much more than 10 deep.
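The arithmetic behind that depth bound is easy to check. A minimal sketch, using the thread's rough figures (real per-neuron delays and recognition times vary; these are illustrative, not measurements):

```python
# Back-of-envelope check of the depth bound, using the thread's numbers.
per_neuron_delay_ms = 15      # ~10-20 ms signal propagation per neuron
recognition_budget_ms = 150   # fast object recognition finishes in <150 ms

# Each serial step through a neuron costs one delay, so the time budget
# caps how many neurons a signal path can pass through in sequence.
max_serial_depth = recognition_budget_ms // per_neuron_delay_ms
print(max_serial_depth)  # -> 10
```

Taking the slower end of the delay range (20 ms) would make the bound even tighter, around 7.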
By the way, while I think of it, did you know that the economist and political philosopher Friedrich von Hayek figured out key “neural” network learning algorithms in the 1920s?

en.wikipedia.org/wiki/Friedrich…
Rationalist theories that suppose (explicitly or implicitly) that our activities derive from chains of inference can’t be true. Neurons are much too slow for that.

That’s evolutionarily fine because propositional inference is mostly useless in nature (without technology) anyway.
The evolutionary task is: what are the meaningful features of this situation I’m in right now? What possibilities do they afford?

How can a depth-10 circuit compute this?

Taking “Parallel Distributed Processing” seriously: only by considering all possibilities simultaneously.
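One way to picture "considering all possibilities simultaneously" is a single, very wide layer that scores every stored possibility against the current situation in one parallel step, rather than searching through them one at a time. This is my own illustrative sketch (names and numbers are invented for the example), not anything from the book or the thread:

```python
import numpy as np

# A depth-1, very wide "circuit": one parallel step scores all
# possibilities at once, instead of a long serial chain of inference.
rng = np.random.default_rng(0)

n_possibilities = 100_000   # an awful lot of units, all active at once
feature_dim = 64

situation = rng.standard_normal(feature_dim)                # features of "right now"
possibilities = rng.standard_normal((n_possibilities, feature_dim))

# One matrix-vector product = one layer of parallel processing:
relevance = possibilities @ situation

# The most relevant possibility pops out without any serial search.
best = int(np.argmax(relevance))
```

The point of the sketch is the shape of the computation: depth 1, width 100,000 — wide and shallow, which is the only regime the timing bound above permits.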
Humans are *terrible* at reasoning. The one thing we are extraordinarily good at is bringing to bear relevant “background understanding” on everything we encounter.

@vervaeke_john argues, persuasively, that this is THE central issue for cognitive science: ipsi.utoronto.ca/sdis/Relevance…
@vervaeke_john Because Heidegger.

Did you listen to @vervaeke_john’s talk about that? (I recommended it an hour ago.)

Or you could read the explanation in this TRUE story about how I became a character in a Ken Wilber novel, because Heidegger. meaningness.com/metablog/ken-w…
@vervaeke_john This was a central point in my PhD research: how do we get intelligent real-time activity while taking seriously the constraint that neural processing is extraordinarily slow?