A new paper on (i) how to connect high-level cognitive theories to neuroscience / neural nets, and (ii) how learners can construct new concepts like number or logic without presupposing them. "Church encoding" is the metaphor we need.
link.springer.com/article/10.100…
Kids learn concepts like number or logic, but it's not really clear how that's actually possible: what could it mean for a learner (or computer!) to *not* have numbers or logic? Can you build a computer without them? And what do these abstract things mean on a neural level?
Church encoding is an idea from mathematical logic where you use one system (lambda calculus) to represent another (e.g. boolean logic). It's basically the same idea as using sets to build objects that *act like integers*, even though they're, well, sets.
en.wikipedia.org/wiki/Set-theor…
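To make that concrete, here's a tiny sketch of my own (Python, purely illustrative, not from the paper): Church-encoding boolean logic. "True" and "false" are nothing but functions, yet they behave exactly like booleans.

TRUE  = lambda x: lambda y: x      # "true" just picks its first argument
FALSE = lambda x: lambda y: y      # "false" picks its second
AND = lambda p: lambda q: p(q)(p)  # if p then q else p
OR  = lambda p: lambda q: p(p)(q)  # if p then p else q
NOT = lambda p: lambda x: lambda y: p(y)(x)

to_bool = lambda p: p(True)(False)  # decode, only to check the behavior
assert to_bool(AND(TRUE)(FALSE)) is False
assert to_bool(OR(FALSE)(TRUE)) is True
assert to_bool(NOT(FALSE)) is True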
The paper describes a Church-encoding *learner*. This learner takes some facts/observations about the world and tries to create a Church encoding that mirrors the world using its own internal dynamics. A generative, productive, compositional mental model. (ht @yimregister)
My favorite way of showing this idea is people who program songs into old printers. They get the internal, inherent dynamics of a printer to do something else cool. That's basically Church encoding. And, I argue, what you do when you learn.
But for brains, we need the most general form of learning possible because people can learn lots of different things. The paper argues that combinatory logic is a nice formalism for this because it's Turing-complete and compositional, like much of thought.
The paper shows how domains like number, logic, dominance relations, domain theories, family trees, grammar, recursion, etc. can be constructed by a Church-encoding system. And the paper shows that combinatory logic structures often generalize in the right ways.
For instance, in number, you could see the first few numbers and induce a system isomorphic to natural numbers, without having them to start. There are a bunch of (bad) arguments in cogsci claiming that's *not possible* even in principle!
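(A toy illustration of that, mine rather than the paper's actual learner: Church numerals. Nothing below presupposes integers, yet the objects behave exactly like the naturals.)

ZERO = lambda f: lambda x: x                          # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application of f
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

count = lambda n: n(lambda k: k + 1)(0)  # decode, only to check the behavior
ONE = SUCC(ZERO)
TWO = SUCC(ONE)
assert count(PLUS(TWO)(ONE)) == 3        # behaves just like 2 + 1 = 3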
The system generalizes sensibly because constructing these representations is essentially the same as learning short programs that explain the data you see. These ideas connect closely to inductive systems that work over computations, dating back to Solomonoff.
Combinatory logic itself is super cool: it was developed in the early 1900s to avoid using variables in logic. Variables are a goddamn nightmare because they change the *syntax* of the system. You can use "x" and "y" in defining f(x,y), but you only get "x" in defining f(x).
This makes f(x,y)-like notation a pain to handle. What early logicians figured out is that in fact you never NEED variables. Instead, you can translate equations with variables into compositions of functions (combinators) that have NO variables anywhere.
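A rough sketch of the trick, with Python lambdas standing in for the combinators (illustrative only; the test function g is made up):

S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)
K = lambda x: lambda y: x                      # K x y = x
I = S(K)(K)                                    # identity, built with no variables: I x = x

# A definition that seems to need a variable, f(x) = g(x)(x),
# becomes the variable-free composition S g I.
g = lambda a: lambda b: (a, b)                 # arbitrary made-up test function
f = S(g)(I)
assert f(7) == g(7)(7) == (7, 7)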
This fact that variables are never necessary is a nice argument against the importance of explicit algebra-like variables (@GaryMarcus). Combinatory logic is a system where variable-like behavior can provably be achieved without any variables at all in the representation.
In this sense, the cog/neuro interest in variables has been misled by the syntax we happened to use in algebra. If math education had adopted combinatory logic-like syntax, we never would have thought that explicit, algebra-like variables were difficult or important.
This connects to neuroscience because variables, structures, and human-like computational flexibility seem hard to get into neural networks. The fact that combinatory logic uses a simple, uniform syntax without variables helps make it encodable in biologically-inspired systems.
In fact, all you need to do (in e.g. a neural network) is have a means to represent binary trees and some simple operations on them, both of which have been around in neural networks for several decades. Combinatory logic works like an assembly language.
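A toy sketch of what that could look like (my own, not the paper's implementation): a combinator term is just nested pairs, i.e. a binary tree, and "computation" is two rewrite rules over those trees.

def step(t):
    """Apply one S/K rewrite at the root of the tree, if a rule matches."""
    if isinstance(t, tuple):
        # ((K, x), y)  ->  x
        if isinstance(t[0], tuple) and t[0][0] == "K":
            return t[0][1]
        # (((S, x), y), z)  ->  ((x, z), (y, z))
        if (isinstance(t[0], tuple) and isinstance(t[0][0], tuple)
                and t[0][0][0] == "S"):
            x, y, z = t[0][0][1], t[0][1], t[1]
            return ((x, z), (y, z))
    return t

term = ((("S", "K"), "K"), "z")   # the tree for S K K z
term = step(step(term))           # two rewrites
assert term == "z"                # S K K acts as the identity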
Then, with those and the Church-encoding idea, all of the structures we talk about in cognitive science (logic, number, grammars, hierarchies, computational processes, etc.) are within reach.
So, a neural version of the representation is easy in principle; what's missing so far is a neural version of the learning. Any interest, @DeepMind or @OpenAI?
At the same time, Church encoding addresses some key questions about meaning. When cognitive scientists state a logical theory like lift(x,y) = cause(x, go(y, up)), what do those things mean, e.g. neurally? It's hard to see, and that's a big part of why the fields don't connect.
In Church encoding (and related philosophical ideas), the meaning of a symbol is determined by how it interacts with other symbols, which is defined by the combinators' dynamics. So Church encoding addresses, rather than dodges, fundamental problems about the meaning of symbols.
The paper most generally tries to show how the many seemingly contradictory approaches to cognition are compatible and may interrelate.
(A pdf is available here: colala.berkeley.edu/papers/piantad…)
