Here is a thread of paper references demonstrating that current deep learning, especially transformers, is already amazingly powerful for symbol manipulation:

@GuillaumeLample, @f_charton
arxiv.org/abs/1912.01412
Solving hard integrals using deep transformers

🧵1/n
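A minimal sketch of the core idea behind that paper (my own toy code, not the authors' implementation): symbolic expressions are serialized into prefix-notation token sequences, so integration becomes a sequence-to-sequence translation task. The SymPy-based helper below is only for illustration.

import sympy as sp

def to_prefix(expr):
    """Serialize a SymPy expression tree into a prefix token list."""
    if expr.is_Atom:                         # symbols, integers, pi, ...
        return [str(expr)]
    tokens = [type(expr).__name__]           # operator name, e.g. 'Mul', 'Add', 'sin'
    for arg in expr.args:
        tokens.extend(to_prefix(arg))
    return tokens

x = sp.Symbol('x')
integrand = x * sp.sin(x)                    # the "question" sequence
antiderivative = sp.integrate(integrand, x)  # the "answer" the model must predict

print(to_prefix(integrand))       # e.g. ['Mul', 'x', 'sin', 'x']
print(to_prefix(antiderivative))  # e.g. ['Add', 'Mul', '-1', 'x', 'cos', 'x', 'sin', 'x']

A transformer trained on many such (integrand, antiderivative) token pairs learns the mapping directly, without being given any rules of calculus.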
A paper from my team at Google, improving reasoning in large theories using contrastive learning for premise selection:

arxiv.org/abs/1606.04442
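To make "contrastive learning for premise selection" concrete, here is a hedged sketch under my own assumptions (the encoders, temperature, and InfoNCE-style in-batch negatives are illustrative choices, not details taken from the paper): a conjecture embedding is trained to score highest against the premise that was actually useful for it, with the other premises in the batch serving as negatives.

import torch
import torch.nn.functional as F

def contrastive_premise_loss(conj_emb, prem_emb, temperature=0.07):
    """conj_emb, prem_emb: (batch, dim) embeddings of matched (conjecture, premise) pairs."""
    conj_emb = F.normalize(conj_emb, dim=-1)
    prem_emb = F.normalize(prem_emb, dim=-1)
    logits = conj_emb @ prem_emb.t() / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(logits.size(0))            # i-th premise is the positive for the i-th conjecture
    return F.cross_entropy(logits, labels)

# Illustrative call with random embeddings standing in for encoder outputs.
loss = contrastive_premise_loss(torch.randn(8, 256), torch.randn(8, 256))

At retrieval time, all library premises can then be ranked by cosine similarity to the conjecture's embedding.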
Another paper by Sarah Loos from our team on improving superposition-based first-order logic theorem provers using deep learning:

arxiv.org/abs/1701.06972

3/n
A paper by Dennis Lee et al. (from our team) demonstrating that latent representations of formulas can be used to perform long chains of non-trivial reasoning purely using vectors:

arxiv.org/abs/1909.11851
One of the first papers to demonstrate that most tasks relevant to high-level mathematics can be trained in an unsupervised manner from existing mathematical formulas, including conjecturing and finding missing proof steps:

arxiv.org/abs/2006.04757
Again, previously unimaginable: transformers can generate long satisfying traces of LTL formulas, by Christopher Hahn et al.

openreview.net/forum?id=dOcQK…
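For readers unfamiliar with the task, here is a toy illustration (my own example, not from the paper) of what a "satisfying trace" means for the LTL formula G(request -> F grant), with the infinite trace given as a lasso: a finite prefix followed by a loop repeated forever, each state being a set of atomic propositions.

def satisfies_g_request_f_grant(prefix, loop, a='request', b='grant'):
    """Check G(a -> F b) on the infinite trace prefix . loop^omega."""
    b_in_loop = any(b in state for state in loop)
    # Inside the loop, the whole loop repeats forever, so "F b" there
    # reduces to "b occurs somewhere in the loop".
    if any(a in state for state in loop) and not b_in_loop:
        return False
    # In the prefix, "F b" means b occurs later in the prefix or anywhere in the loop.
    for i, state in enumerate(prefix):
        if a in state and not (any(b in s for s in prefix[i:]) or b_in_loop):
            return False
    return True

prefix = [{'request'}, set()]
loop = [{'grant'}, set()]
print(satisfies_g_request_f_grant(prefix, loop))  # True: every request is eventually granted

The paper's point is that a transformer can emit such traces directly from the formula, rather than a model checker having to search for them.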
GPT-f, used to produce complete Metamath proofs, by @spolu and @ilyasut:

arxiv.org/abs/2009.03393
A complete HOL theorem prover based purely on RL and GNNs; it solves over 60% of the HOL Light library fully automatically.

This dataset contains a lot of graduate-level mathematics.

Some proofs are significantly more elegant than their human counterparts.

arxiv.org/abs/1905.10006
Improving Lean automated theorem proving by unsupervised training of transformers on intermediate proof terms:

arxiv.org/abs/2102.06203
Solving some Olympiad problems with GPT-f:

openai.com/blog/formal-ma…
Connecting experience with formulas, by deep symbolic regression:

arxiv.org/abs/2201.04600
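The paper trains transformers to generate the formula from the observations; the brute-force toy below (my own illustration, not the paper's method) only shows what the symbolic regression task is: mapping observed numeric "experience" back to a closed-form expression.

import math

# "experience": numeric observations of an unknown law
xs = [1, 2, 3, 4, 5]
ys = [1, 4, 9, 16, 25]

candidates = {
    'x':      lambda x: x,
    '2*x':    lambda x: 2 * x,
    'x**2':   lambda x: x ** 2,
    'exp(x)': lambda x: math.exp(x),
}

def mse(f):
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # 'x**2': the recovered formula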
Full-blown state-of-the-art first-order logic theorem proving by DeepMind, trained using a lot of synthetic data and hindsight experience replay:

arxiv.org/abs/2103.03798
Solving mathematics problems given in natural language using GPT-style transformers and a lot of manual prompt engineering:

arxiv.org/abs/2111.08267
