ML TLDR
May 31, 2021 · 7 tweets · 4 min read
"Attention is all you need" is one of the most cited papers in last couple of years. What is attention? Let's try to understand in this thread.

Paper link: arxiv.org/abs/1706.03762

#DeepLearning #MachineLearning #Transformers
In the self-attention mechanism, we update the features of a given point with respect to the other features. The attention proposed in this paper is also known as scaled dot-product attention.
Let's say our data point is a single sentence. We embed each word into some d-dimensional space, compute how similar each word is to every other word, and weight its representation accordingly. The similarity matrix is just a scaled dot product!
In practice, this is how we compute it: for each feature we calculate three vectors, a query, a key and a value. For a given feature, we take the dot product of its query with the keys of all the features and scale the result to get the similarity matrix.
Then we take the softmax over the similarity matrix. The output is nothing but the softmax-weighted sum of the value vectors! That's it! Very simple, right?!
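As a rough sketch of that computation (not the paper's reference code), here is scaled dot-product attention in NumPy, assuming the query, key and value matrices have already been computed from the word embeddings:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (n, d_k) and V: (n, d_v) for a sentence of n tokens."""
    d_k = Q.shape[-1]
    # similarity matrix: every query dotted with every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # row-wise softmax so the attention weights for each token sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # output: attention-weighted sum of the value vectors
    return weights @ V
```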
We can extend this mechanism to multiple heads too! Check out this article for clear illustrations and explanations - jalammar.github.io/illustrated-tr…
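And a minimal sketch of the multi-head version, assuming learned projection matrices Wq, Wk, Wv and Wo (names are illustrative, not taken from the paper's code): each head attends in its own subspace, and the head outputs are concatenated and projected back to d_model.

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """X: (n, d_model); Wq, Wk, Wv, Wo: (d_model, d_model)."""
    n, d_model = X.shape
    d_head = d_model // num_heads

    # project the inputs and split the feature dimension into heads
    def project(W):
        return (X @ W).reshape(n, num_heads, d_head).transpose(1, 0, 2)  # (heads, n, d_head)

    Q, K, V = project(Wq), project(Wk), project(Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)                  # (heads, n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    heads = weights @ V                                                   # (heads, n, d_head)
    # concatenate the heads back together and apply the output projection
    return heads.transpose(1, 0, 2).reshape(n, d_model) @ Wo
```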

Credits for the above images go to this blog. Check it out for step-by-step illustrations and code - towardsdatascience.com/illustrated-se…
Thank you for reading. If you think this thread helped you learn something new, do retweet and follow us!

More from @MLsummaries

Jun 7, 2021
There are way too many papers on #MachineLearning and #DeepLearning these days. How do you choose which papers to read? A tiny thread 🧵
The first one is our absolute favorite. Arxiv sanity by none other than @karpathy!

Link: arxiv-sanity.com
The second one is by @labmlai. It is pretty new and the interface is pretty smooth!

Link: papers.labml.ai/papers/recent/
Apr 7, 2021
This paper shares 56 stories of researchers in Computer Vision, young and old, scientists and engineers. Reading it was a cocktail of emotions as you simultaneously relate to stories of joy, excitement, cynicism, and fear. Give it a read!

#ComputerVision
Some quotes from the stories - it was a "tough and hopeless time" in computer vision "before 2012, [when] the annual performance improvements over ImageNet are quite marginal."
"she told me you should solve the problem purely based on deep learning... I did not think the occlusion problem can be solved without explicitly reasoning of shape priors and depth ordering"
Apr 7, 2021
Today we will summarize the Vision Transformer (ViT) from Google. Inspired by BERT, the authors apply essentially the same architecture to image classification tasks.

Link: arxiv.org/abs/2010.11929
Code: github.com/google-researc…

#MachineLearning #DeepLearning
The authors take the BERT architecture and apply it to images with minimal changes. Since compute grows with the length of the sequence, instead of treating each pixel as a word, they propose to split the image into some N patches and treat each patch as a token.
So we first take each patch, flatten it (into a vector of length P²C), and project it linearly to dimension D. At the 0th position we add a learnable D-dimensional embedding, and positional encodings are added to these embeddings.
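A minimal sketch of this patch-embedding step, assuming a square image whose side is divisible by the patch size P (variable names are illustrative, not from the official repo):

```python
import numpy as np

def vit_embed(image, W_proj, cls_token, pos_embed, P):
    """image: (H, W, C); W_proj: (P*P*C, D); cls_token: (1, D); pos_embed: (N+1, D)."""
    H, W, C = image.shape
    # split the image into N = (H/P) * (W/P) non-overlapping P x P patches
    patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(-1, P * P * C)      # each patch flattened to length P^2 * C
    tokens = patches @ W_proj                     # linear projection to dimension D
    # prepend the learnable class embedding at position 0
    tokens = np.concatenate([cls_token, tokens], axis=0)
    # add the positional encodings so patch order is not lost
    return tokens + pos_embed
```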
Apr 6, 2021
Depending on the problem we are trying to solve, the loss function varies. Today we are going to learn about the triplet loss. You may have heard of it while reading about Siamese networks.

#MachineLearning #DeepLearning #RepresentationLearning
Triplet loss is an important loss function for learning a good "representation". What's a representation, you ask? Finding the similarity (or difference) between two images is hard if you just use raw pixels.
So what do we do about it? Given three images, cat1, cat2, and dog, we use a neural network to map them to vectors f(cat1), f(cat2), and f(dog).
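The standard formulation of the loss (variants differ in the distance used) then pulls f(cat1) and f(cat2) together while pushing f(dog) away by at least a margin. A minimal sketch:

```python
import numpy as np

def triplet_loss(f_anchor, f_positive, f_negative, margin=0.2):
    """f_*: embedding vectors, e.g. f(cat1), f(cat2), f(dog); margin is a hyperparameter."""
    d_pos = np.sum((f_anchor - f_positive) ** 2)  # squared distance anchor <-> positive (cat1, cat2)
    d_neg = np.sum((f_anchor - f_negative) ** 2)  # squared distance anchor <-> negative (cat1, dog)
    # zero loss once the positive is closer than the negative by at least `margin`
    return max(d_pos - d_neg + margin, 0.0)
```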
Mar 29, 2021
To get the intuition behind Machine Learning algorithms, we need some background in math, especially Linear Algebra, Probability & Calculus. Consolidating a few cheat-sheets here. A thread 👇
For Linear Algebra: topics include vector spaces, matrix-vector operations, rank of a matrix, norms, eigenvectors and eigenvalues, and a bit of matrix calculus too.

souravsengupta.com/cds2016/lectur…

(Advanced) cs229.stanford.edu/section/cs229-…
For Probability & Statistics: random variables, expectation, probability distributions, and so on.

stanford.edu/~shervine/teac…

stanford.edu/~shervine/teac…

(Advanced) cs229.stanford.edu/section/cs229-…
