What should be baffling to many, but isn't mentioned often enough, is how a neural network based on continuous mathematics leads to something like GPT-3, which works on discrete tokens.
Three years ago, I wrote a rebuttal blog post arguing why Deep Learning could be applied to NLP. It was in response to a post making the rounds claiming that this was impossible: medium.com/intuitionmachi…
The argument against Deep Learning was that it is based on continuous functions and thus cannot be applied to non-continuous things like words: linkedin.com/pulse/google-h…
My arguments back then, however, were based on empirical evidence from the field: DL in NLP was already paying dividends in 2017. What is missing, of course, is an answer to why it works.
What is it about language, which obviously looks discrete, that allows it to be processed by something that works only in the continuous?
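To make the paradox concrete, here is a minimal sketch (not from the original thread; the toy vocabulary, dimensions, and random weights are illustrative assumptions). Discrete tokens enter and leave the model, but everything in between is continuous: an embedding lookup maps a token to a vector, continuous functions transform it, and a softmax followed by argmax maps the result back to a token.

```python
# Minimal sketch: how a model that is continuous inside consumes and emits discrete tokens.
# Vocabulary, dimensions, and weights are toy assumptions, not a real language model.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "mat"]                   # discrete symbols
d_model = 8

embedding = rng.normal(size=(len(vocab), d_model))     # token id -> continuous vector
W = rng.normal(size=(d_model, d_model))                # stand-in for the network's continuous map
unembedding = rng.normal(size=(d_model, len(vocab)))   # continuous vector -> vocabulary scores

def next_token(token: str) -> str:
    token_id = vocab.index(token)                      # discrete in
    h = np.tanh(embedding[token_id] @ W)               # continuous computation throughout
    logits = h @ unembedding
    probs = np.exp(logits) / np.exp(logits).sum()      # softmax: a continuous distribution over tokens
    return vocab[int(np.argmax(probs))]                # argmax (or sampling) makes the output discrete again

print(next_token("cat"))
```

The only discrete steps are the lookup at the input and the argmax (or sampling) at the output; the "understanding" in between lives entirely in continuous space.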
The problem is that the mathematical tools we employ are ones that originate from Newton. There is a gap in our folk knowledge as to how the discrete and the continuous interact.
A little over 100 years ago, there was a debate over which was more fundamental: logic or mathematics. In the eyes of the public, it was logic that won that debate.
In 1931, the supremacy of logic over mathematics was falsified by Gödel's incompleteness theorems. See: en.wikipedia.org/wiki/G%C3%B6de…
Reality is mathematical; in fact, as Turing revealed, reality is computational.
It was Bertrand Russell who championed logic over mathematics. On the other side of that debate was the American C.S. Peirce.
It was C.S. Peirce who came up with the quantification that we use in predicate calculus. He even had a graphical notation that expressed it.
The greatest of all American thinkers, C.S. Peirce, is unfortunately not well known. Here is a person who revealed the possibility of NAND and NOR gates as universal building blocks for logic, yet few know of him.
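As a concrete aside, here is a minimal sketch in plain Python (my illustration, not Peirce's own notation) of what that universality means: NOT, AND, and OR can all be composed from NAND alone, so any Boolean function can be built from a single gate type.

```python
# Minimal sketch of NAND universality: NOT, AND, and OR built from NAND alone.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)              # NAND of a with itself is NOT a

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))        # negating NAND recovers AND

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))  # De Morgan: OR from NAND of negations

# Check the derived gates against Python's built-in Boolean operators.
bits = (False, True)
assert all(not_(a) == (not a) for a in bits)
assert all(and_(a, b) == (a and b) for a in bits for b in bits)
assert all(or_(a, b) == (a or b) for a in bits for b in bits)
```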
He understood, 100 years ago, how the artificial logic systems (i.e., computers) we benefit from today would be built. In fact, he was prescient enough to foretell the demise of GOFAI, over half a century before the term A.I. was even coined.
He proposed that human cognition consists of induction, deduction, and abduction. We have artificial versions of the first two today: Deep Learning and symbolic logic.
He was so far beyond his time that we need to ask: what did he understand about the interplay between the discrete and the continuous? Surely this paradox could not have escaped his scrutiny.
How does C.S. Peirce's thinking lead to an understanding of how continuous functions lead to discrete language understanding?
Let's explore how to frame current #transformer language models in the framework of Peircean semiotics. gum.co/empathy