Bojan Tunguz
Aug 5 · 5 tweets · 3 min read
A very good paper I came across this morning by the @DeepMind researchers. For the past five years Transformers have been one of the most dominant approaches to Deep Learning problems, especially in the #NLP domain.

1/5
However, despite many interesting papers on the topic, and lots of good open code, there has been a noticeable lack of a *formal* definition of what Transformers are, especially at the level of pseudocode.

2/5
This paper aims to rectify that. It provides pseudocode for almost all major Transformer architectures, including training algorithms.

3/5
One of the main benefits of having this pseudocode available is for researchers who want to extend or modify some aspect of the Transformer architecture and go beyond any particular framework or even ML paradigm.

4/5
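
To give a flavor of what that kind of pseudocode looks like when written down, here is a minimal sketch of single-head scaled dot-product attention in plain NumPy. It is my own illustration of the general idea, not the paper's actual pseudocode; the function names, dimensions, and the causal-mask example are all assumptions made for the demo.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def single_head_attention(X, Wq, Wk, Wv, mask=None):
    """Single-head scaled dot-product attention (illustrative sketch).

    X      : (seq_len, d_model) token representations
    Wq, Wk : (d_model, d_k) query/key projections
    Wv     : (d_model, d_v) value projection
    mask   : optional (seq_len, seq_len) boolean matrix; True = may attend
    """
    Q = X @ Wq                                  # queries
    K = X @ Wk                                  # keys
    V = X @ Wv                                  # values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # (seq_len, seq_len)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)   # block disallowed positions
    weights = softmax(scores, axis=-1)          # attention weights per query
    return weights @ V                          # (seq_len, d_v)

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
causal = np.tril(np.ones((4, 4), dtype=bool))   # decoder-style causal mask
out = single_head_attention(X, Wq, Wk, Wv, mask=causal)
print(out.shape)                                # (4, 8)
```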

More from @tunguz

Jul 22
The longer you work with ML algorithms, the more you appreciate what an outsize effect your *data* has on the quality of your models. I've seen that shift on Kaggle over the years, where more and more time is spent on some kind of dataset augmentation.

1/5
There is still only so much you can do there, and unless you are "enterprising" and decide to scrape the competition host's website for their data (yes, this has happened), your legitimate options are rather limited.

2/5
Outside of the Kaggle world, however, things are different. Large computational resources and advanced algorithms still dominate the ML discourse, but those who are paying attention know that neither of them would be worth much without the huge datasets that are being used.

3/5
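
As a purely illustrative aside (not something from the thread itself): one cheap, common flavor of tabular augmentation is adding small Gaussian noise to numeric columns so the model trains on slightly jittered copies of the rows. A minimal sketch, with made-up column names:

```python
import numpy as np
import pandas as pd

def jitter_numeric(df, numeric_cols, scale=0.01, copies=1, seed=0):
    """Return df plus `copies` noisy duplicates of its numeric columns.

    Noise is Gaussian with std = scale * column std; categorical columns
    are copied unchanged. A deliberately simple augmentation sketch.
    """
    rng = np.random.default_rng(seed)
    augmented = [df]
    for _ in range(copies):
        noisy = df.copy()
        for col in numeric_cols:
            noise = rng.normal(0.0, scale * df[col].std(), size=len(df))
            noisy[col] = df[col] + noise
        augmented.append(noisy)
    return pd.concat(augmented, ignore_index=True)

# Hypothetical example: double a tiny training frame with jittered copies.
train = pd.DataFrame({"age": [25, 40, 31],
                      "income": [40e3, 72e3, 55e3],
                      "segment": ["a", "b", "a"]})
train_aug = jitter_numeric(train, ["age", "income"], scale=0.05, copies=1)
print(len(train), "->", len(train_aug))   # 3 -> 6
```
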
Jul 22
It's actually scary how ignorant academics who try to do research on NNs for tabular data are of tabular data. I think that part of the problem is that almost all of the interesting and relevant tabular data problems are in industry,

1/3
and academics tend to be completely insulated from any kind of practical application of ML/DS.

If you are an academic who is interested in doing research on tabular data,

2/3
I would BEG YOU, FOR THE LOVE OF EVERYTHING THAT IS DECENT, PLEASE, PLEASE PLEASE GET OUT OF YOUR IVORY TOWER AND TRY TO LEARN WHAT KINDS OF PROBLEMS ACTUAL DATA SCIENTISTS DEAL WITH IN THEIR PROFESSIONAL LIVES!!!

3/3
Jul 2
This week @Google researchers announced Minerva, an internally developed project that can answer mathematical questions and tackle other complex topics such as physics.

1/5
This project makes some really impressive gains with an automatic NLP approach to tackling challenging quantitative reasoning problems. Minerva is a large language model pretrained on general natural language data and further trained on technical content.

2/5
The model achieves state-of-the-art performance on technical benchmarks without the use of external tools.

3/5
Jul 1
Neural Networks and Deep Learning have been incredible Machine Learning breakthroughs, both in terms of extending the scope of what we can do with Machine Learning and in terms of their practical utility. They have more or less become synonymous with Artificial Intelligence.

1/5
In the fields of Computer Vision and Natural Language Processing in particular they have only gone from strength to strength. I for one am really excited about these developments, and am really bullish about what else we may achieve in the upcoming years.

2/5
I don’t see us hitting any walls there, either currently or in the near future. All the SOTA work so far has indicated that there are no diminishing returns on how much we can get out of large models.

3/5
Jun 23
The way I see it, the two most important features of Neural Networks that make them so powerful are 1. Differentiability and 2. Compositionality.

1/7
Differentiability enables optimization using gradient descent, which is orders of magnitude faster than most other numerical optimization methods.

2/7
Compositionality, on the other hand, means that we can make use of the chain rule for differentiation and break down potentially unwieldy functions into small, manageable units that we can handle one at a time.

3/7
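
As a toy illustration of those two points (my own sketch, not from the thread): the loss (sin(w*x) - y)^2 is a composition of small pieces, its gradient is assembled unit by unit via the chain rule, and that gradient then drives plain gradient descent.

```python
import math

def loss_and_grad(w, x, y):
    """Loss (sin(w*x) - y)^2 and its gradient in w, built via the chain rule.

    The composition is handled one small unit at a time:
    u = w*x  ->  s = sin(u)  ->  r = s - y  ->  L = r^2
    """
    u = w * x
    s = math.sin(u)
    r = s - y
    L = r * r
    # Chain rule, outermost to innermost: dL/dw = dL/dr * dr/ds * ds/du * du/dw
    dL_dr = 2 * r
    dr_ds = 1.0
    ds_du = math.cos(u)
    du_dw = x
    return L, dL_dr * dr_ds * ds_du * du_dw

# A few gradient-descent steps on the single parameter w (toy numbers).
w, x, y, lr = 0.5, 1.3, 0.9, 0.1
for step in range(5):
    L, g = loss_and_grad(w, x, y)
    w -= lr * g
    print(f"step {step}: loss={L:.4f}")
```
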
Jun 9
Machine Learning for tabular data is equal parts art and science. One of the main reasons for this is that tabular datasets come in all shapes and sizes, and there are no approaches that are *general* enough to apply in all circumstances.

1/9
For instance, there are no large pretrained models for tabular datasets, and transfer learning is for all practical purposes nonexistent.

2/9
Furthermore, there are a few aspects of how tabular data is prepared that have a disproportionate impact on the performance of the Machine Learning model, far more so than in other domains. Some of the considerations (by no means an exhaustive list) include:

3/9
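
The preview cuts off before the actual list, so purely as a hypothetical example of one such preparation choice (my illustration, not the author's): how a categorical column is encoded changes the feature matrix the model ever sees, e.g. one-hot encoding versus ordinal codes.

```python
import pandas as pd

# Hypothetical toy frame; the column names are made up for illustration.
df = pd.DataFrame({"city": ["paris", "tokyo", "paris", "lima"],
                   "clicks": [3, 7, 2, 5]})

# Option 1: one-hot encoding -- one binary column per category.
one_hot = pd.get_dummies(df, columns=["city"], prefix="city")

# Option 2: ordinal codes -- one integer column, imposing an arbitrary order.
ordinal = df.copy()
ordinal["city"] = ordinal["city"].astype("category").cat.codes

print(one_hot.shape, ordinal.shape)   # (4, 4) vs (4, 2)
```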
