This paper shares 56 stories of researchers in Computer Vision, young and old, scientists and engineers. Reading it was a cocktail of emotions, as you simultaneously relate to stories of joy, excitement, cynicism, and fear. Give it a read!
Some quotes from the stories: it was a "tough and hopeless time" in computer vision "before 2012, [when] the annual performance improvements over ImageNet are quite marginal."
"she told me you should solve the problem purely based on deep learning... I did not think the occlusion problem can be solved without explicitly reasoning of shape priors and depth ordering"
"deep learning-based systems are trained for very specific objectives, and are far from resembling anything that could be considered a general model"
"students are now stuck in this deep learning mode of thought, unable to consider other approaches. This narrow perspective – and a perceived focus on beating benchmarks as opposed to advancing science"
"because the system is so much of a black box, trying to build in explainability and transparency into the system feels inherently futile sometimes. The most we can do is really focus on what inputs are going into the system, weights, and training data."
"Having something accepted appears to be more important than having something good accepted"
"After I left their home, my girlfriend’s father... expressed his concern against our relationship. He thought it is hard for me in this field to find a good job in the US, even in China."
"Industry is especially excited to be involved, as everyone wants to advertise that their product uses A.I. and is therefore faster, and smarter, than the products of their competitors"
Today we will summarize the Vision Transformer (ViT) from Google. Inspired by BERT, the authors apply essentially the same architecture to image classification tasks.
The authors take the BERT architecture and apply it to images with minimal changes. Since compute grows with sequence length, instead of treating each pixel as a word, they propose splitting the image into N patches and treating each patch as a token.
So first take each patch, flatten it (giving a vector of length P²C), and project it linearly to dimension D. Then prepend a learnable D-dimensional embedding at position 0 (the class token), and add positional encodings to these embeddings.
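The patch-embedding step above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the image size (32×32), patch size P=8, embedding dimension D=64, and the random "weights" standing in for learned parameters are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: a 32x32 RGB image, 8x8 patches, embedding dim D=64.
H = W = 32               # image height/width
C = 3                    # channels
P = 8                    # patch size
D = 64                   # embedding dimension
N = (H // P) * (W // P)  # number of patches -> 16

image = rng.standard_normal((H, W, C))

# Split the image into N patches, each flattened to length P*P*C (= P^2 C).
patches = image.reshape(H // P, P, W // P, P, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(N, P * P * C)

# Linear projection to dimension D (this matrix would be learned).
W_proj = rng.standard_normal((P * P * C, D)) * 0.02
tokens = patches @ W_proj                              # (N, D)

# Prepend the learnable class embedding at position 0.
cls_token = rng.standard_normal((1, D)) * 0.02
tokens = np.concatenate([cls_token, tokens], axis=0)   # (N + 1, D)

# Add learnable positional encodings.
pos_embed = rng.standard_normal((N + 1, D)) * 0.02
tokens = tokens + pos_embed

print(tokens.shape)  # (17, 64): 16 patch tokens + 1 class token
```

The resulting (N + 1) × D sequence is what the Transformer encoder consumes; the class token's output is used for classification.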
Depending on the problem we are trying to solve, the loss function varies. Today we are going to learn about the triplet loss. You may have come across it while reading about Siamese networks.
Triplet loss is an important loss function for learning a good “representation”. What’s a representation you ask? Finding similarity (or difference) between two images is hard if you just use pixels.
So what do we do about it? Given three images cat1, cat2, and dog, we use a neural network to map the images to vectors f(cat1), f(cat2), and f(dog).
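The standard hinge form of the triplet loss can then be sketched as follows. The margin value (0.2) and the toy embedding vectors standing in for f(cat1), f(cat2), f(dog) are made up for the example; real embeddings would come from the network.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors: the anchor should be
    closer to the positive than to the negative by at least the margin."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings standing in for f(cat1), f(cat2), f(dog).
f_cat1 = np.array([1.0, 0.0])
f_cat2 = np.array([0.9, 0.1])
f_dog  = np.array([-1.0, 0.5])

# The two cats are close and the dog is far, so the loss is already zero.
loss = triplet_loss(f_cat1, f_cat2, f_dog)
print(loss)  # 0.0

# Swapping positive and negative violates the margin, giving a positive loss.
bad_loss = triplet_loss(f_cat1, f_dog, f_cat2)
```

Minimizing this loss over many (anchor, positive, negative) triplets pulls same-class embeddings together and pushes different-class embeddings apart, which is exactly the "good representation" the thread describes.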
To get the intuition behind the Machine Learning algorithms, we need to have some background in Math, especially Linear Algebra, Probability & Calculus. Consolidating a few cheat-sheets here. A thread 👇
For Linear Algebra: topics include vector spaces, matrix-vector operations, rank of a matrix, norms, eigenvectors and eigenvalues, and a bit of matrix calculus too.