Yann LeCun
12 May, 12 tweets, 3 min read
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning.
By Adrien Bardes, Jean Ponce, and yours truly.
arxiv.org/abs/2105.04906
Insanely simple and effective method for self-supervised training of joint-embedding architectures (e.g. Siamese nets).
1/N
TL;DR: Joint-embedding archis (JEA) are composed of 2 trainable models Gx(x) and Gy(y), trained with pairs of "compatible" inputs (x,y).
For example: x and y can be distorted versions of the same image, or successive sequences of video frames.
The main difficulty is to prevent collapse, i.e. the two models ignoring their inputs and producing identical, constant embeddings.
2/N
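To make the setup concrete, here is a minimal PyTorch sketch of a joint-embedding architecture (not from the paper; the backbone and dimensions are placeholders, and the two branches share weights, as in our experiments):

```python
import torch.nn as nn

class JointEmbedding(nn.Module):
    # Two trainable branches Gx and Gy mapping a "compatible" pair (x, y)
    # to embedding vectors. The branches share weights here, as in the
    # VICReg experiments, but VICReg does not require it.
    def __init__(self, backbone: nn.Module, feat_dim: int = 2048, emb_dim: int = 8192):
        super().__init__()
        self.backbone = backbone          # e.g. a ResNet-50 trunk
        self.expander = nn.Sequential(    # MLP head; the loss is applied to its output
            nn.Linear(feat_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim),
        )

    def forward(self, x, y):
        # x and y: two distorted views of the same images
        z_x = self.expander(self.backbone(x).flatten(1))
        z_y = self.expander(self.backbone(y).flatten(1))
        return z_x, z_y
```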
VICReg is a loss for JEAs with 3 terms (written out below):
1. Variance: Hinge loss to maintain the std-dev of each component of Gx(x) & Gy(y) above a margin
2. Invariance: ||Gx(x)-Gy(y)||^2
3. Covariance: sum of the squares of the off-diag terms of the covariance matrices of Gx(x) and Gy(y).
3/N
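Written out (notation mine, following the paper's definitions): with Z = Gx(x) and Z' = Gy(y) two n×d batches of embeddings,

```latex
% Variance: hinge on the standard deviation of each embedding dimension j,
% with margin \gamma and a small \epsilon for numerical stability:
v(Z) = \frac{1}{d} \sum_{j=1}^{d} \max\!\left(0,\; \gamma - \sqrt{\operatorname{Var}(Z_{:,j}) + \epsilon}\right)

% Invariance: mean-squared distance between paired embeddings:
s(Z, Z') = \frac{1}{n} \sum_{i=1}^{n} \lVert Z_i - Z'_i \rVert_2^2

% Covariance: squared off-diagonal entries of the covariance matrix C(Z):
c(Z) = \frac{1}{d} \sum_{j \neq k} \left[ C(Z) \right]_{j,k}^2

% Weighted sum, with \lambda, \mu, \nu hyperparameters:
\mathcal{L} = \lambda\, s(Z, Z') + \mu\,[\, v(Z) + v(Z') \,] + \nu\,[\, c(Z) + c(Z') \,]
```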
#1 is the innovation in VICReg.
#2 is classical.
#3 is inspired by Barlow Twins: It pulls up the information content of the embedding vectors by decorrelating their components.
4/N
Competitive results on ImageNet with a linear classifier head, on transfer tasks, and on semi-supervised ImageNet in the low-data regime: essentially on par with BYOL and Barlow Twins, slightly above SwAV without multicrop, and slightly below SwAV with multicrop.
6/N
Results on transfer tasks.
7/N
The hinge-on-variance term prevents collapse and alleviates the need for batch-norm or a predictor when added to methods such as SimSiam.
8/N
PyTorch pseudocode:
9/N
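(The pseudocode image doesn't survive the unroll. Below is a sketch of the loss reconstructed from the description in tweets 3 and 4, with the default coefficients reported in the paper; treat it as illustrative rather than the exact released code.)

```python
import torch
import torch.nn.functional as F

def off_diagonal(m):
    # Zero out the diagonal of a square matrix.
    return m - torch.diag(torch.diag(m))

def vicreg_loss(z_a, z_b, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    # z_a = Gx(x), z_b = Gy(y): two (N, D) batches of embeddings.
    # lam/mu/nu weight the invariance/variance/covariance terms
    # (25/25/1 are the defaults reported in the paper).
    N, D = z_a.shape

    # 2. Invariance: mean-squared distance between paired embeddings.
    inv_loss = F.mse_loss(z_a, z_b)

    # 1. Variance: hinge loss keeping the std-dev of every embedding
    # dimension above the margin gamma -- this is what prevents collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var_loss = F.relu(gamma - std_a).mean() + F.relu(gamma - std_b).mean()

    # 3. Covariance: penalize squared off-diagonal entries of each branch's
    # covariance matrix to decorrelate the embedding dimensions.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (N - 1)
    cov_b = (z_b.T @ z_b) / (N - 1)
    cov_loss = (off_diagonal(cov_a).pow(2).sum() / D
                + off_diagonal(cov_b).pow(2).sum() / D)

    return lam * inv_loss + mu * var_loss + nu * cov_loss
```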
Main features:
- Super-simple method with well-controlled collapse prevention through the hinge loss (see the PyTorch pseudocode above).
- No need for batch-wise or channel-wise normalization (though it helps a bit).
10/N
- No need for shared weights between the 2 branches, although in our experiments, the weights are shared.
- No need for moving-averaged weights, stop-gradient, a predictor, negative mining (it's non-contrastive!), memory banks, nearest neighbors, nor quantization/distillation; a full training step is sketched below.
11/N
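Putting the two sketches together, a training step needs none of those mechanisms. Illustrative only: `loader`, `augment`, `model`, and `opt` are hypothetical placeholders.

```python
# Hypothetical training loop combining the JointEmbedding and vicreg_loss
# sketches above; `augment` stands in for the image distortions.
for images in loader:
    x, y = augment(images), augment(images)   # a "compatible" pair of views
    z_x, z_y = model(x, y)                    # shared-weight branches
    loss = vicreg_loss(z_x, z_y)              # variance + invariance + covariance
    opt.zero_grad()
    loss.backward()   # plain backprop: no stop-gradient, predictor, or momentum encoder
    opt.step()
```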
Adrien Bardes is a resident PhD student at FAIR-Paris & Inria/ENS, co-advised by Jean Ponce (Inria/ENS) and myself (FAIR/NYU).
12/N
N=12.

More from @ylecun

6 Jul
There were two patents on ConvNets: one for ConvNets with strided convolution, and one for ConvNets with separate pooling layers.
They were filed in 1989 and 1990 and allowed in 1990 and 1991.
1/N
We started working with a development group that built OCR systems from the ConvNet technology. Shortly thereafter, AT&T acquired NCR, which was building check imagers/sorters for banks. Images were sent to humans for transcription of the amount. Obviously, they wanted to automate that.
2/N
A complete check reading system was eventually built that was reliable enough to be deployed.
Commercial deployment in banks started in 1995.
The system could read about half the checks (machine printed or handwritten) and sent the other half to human operators.
3/N
10 Jun
Very nice work from Google on deep-RL-based optimization for chip layout.
Simulated annealing and its heirs are finally dethroned after 40 years.
This uses graph NNs and deConvNets, among other things.
I did not imagine back in the 90s that (de)ConvNets could be used for this.
This is the kind of problem where gradient-free optimization must be applied, because the objectives are not differentiable with respect to the relevant variables. [Continued...]
In this application, RL is used as a particular type of gradient-free optimization to produce a *sequence* of moves.
It uses deep models to learn good heuristics as to what action to take in every situation.

This is exactly the type of setting in which RL shines.
12 Mar
@mcCronjaeger @BloombergME The list is much too long for a Twitter thread.
I'll leave that for FB's comm people to do.
@mcCronjaeger @BloombergME More importantly, the whole premise of the article is wrong.
The SAIL / Responsible AI group's role *never* was to deal with hate speech and misinformation.
That's in the hands of other groups with *hundreds* of people in them.
In fact, "integrity" involves over 30,000 people...
@mcCronjaeger @BloombergME So the central theme of the article, that RespAI wasn't given the necessary resources to do its job is patently false.

Second, AI is heavily used for content moderation: filtering hate speech, polarizing content, violence, bullying, etc...
13 Jan
Electricity production in Europe in 2020.

Right:
Each colored point-cloud is a country
Each point (x, y) is one hour of electricity production, with x = energy produced in kWh and y = CO2 emissions in g/kWh.

Left:
Bar graphs of the mix of production methods for selected countries.

1/N
France: low overall CO2 emissions, low variance of emissions, relying essentially on nuclear energy with a bit of hydro [reminder: nuclear produces essentially no CO2].
2/N
Germany: despite having a large proportion of renewables, it has high emissions and a high variance of emissions: when there is no wind or sun, it has to rely on fossil fuels, having phased out nuclear production.
3/N
23 Jun 20
I'm an immigrant.

I first came to work at Bell Labs on a J-1 visa, because I thought I'd stay only a year or two.
But I stayed longer and got an H1-B visa.
Then I got a green card....
1/N

nytimes.com/2020/06/22/us/…
I hesitated to take up citizenship during the GW Bush years, waiting for the country to become respectable again.
But after Bush's re-election, I just wanted to be able to vote and kick out the neocon bastards.
So I became a citizen just in time to vote for Barack Obama.
2/N
As an immigrant, scientist, academic, liberal, atheist, and Frenchman, I am a concentrate of everything the American Right hates.
3/N
22 Jun 20
@timnitGebru If I had wanted to "reduce harms caused by ML to dataset bias", I would have said "ML systems are biased *only* when data is biased".
But I'm absolutely *not* making that reduction.
1/N
@timnitGebru I'm making the point that in the *particular* *case* of *this* *specific* *work*, the bias clearly comes from the data.
2/N
@timnitGebru There are many causes for *societal* bias in ML systems
(not talking about the more general inductive bias here).
1. the data, how it's collected and formatted.
2. the features, how they are designed.
3. the architecture of the model
4. the objective function
5. how it's deployed
