Yann Dubois · Nov 28 · 11 tweets · 10 min read
#NeurIPS2022
What are ideal representations for self-sup. learning (SSL)?

🤓We give simple optimality conditions and use them to improve/understand/derive SSL methods!

🔥outperform baselines on ImageNet

arxiv.org/abs/2209.06235
w. @tatsu_hashimoto @StefanoErmon @percyliang
🧵
Goal: ideal representations should let linear probes perfectly predict, as sample-efficiently as possible, any task that is invariant to the augmentations.

Q: Which of the following representations is optimal?

2/8
A: the last one.

More generally, we show that representations are optimal if and only if:
1. *Predictability*: linear probes can predict the equivalence classes
2. *High dimension*: representation dimension d = (# equivalence classes) - 1
3. *Invariance*: representations of equivalent examples collapse

3/8
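The three conditions can be sanity-checked on a toy example. Below is a minimal numpy sketch under illustrative assumptions (random class embeddings, least-squares probe), not the paper's construction: collapse all equivalent examples of C classes to C points in d = C - 1 dimensions and verify that a linear probe recovers the class perfectly.

```python
import numpy as np

C = 4                      # number of equivalence classes
d = C - 1                  # condition 2: representation dimension
rng = np.random.default_rng(0)

# C class representations in R^(C-1); any C points in general position
# are linearly separable once a bias term is added.
class_reprs = rng.standard_normal((C, d))

# Condition 3 (invariance): every augmented example of a class maps to
# the exact same point, its class representation.
labels = rng.integers(0, C, size=200)     # class of each example
Z = class_reprs[labels]                   # collapsed representations

# Condition 1 (predictability): fit a linear probe via least squares on
# one-hot targets and check it predicts the class perfectly.
X = np.hstack([Z, np.ones((len(Z), 1))])  # add bias column
Y = np.eye(C)[labels]                     # one-hot targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = (X @ W).argmax(axis=1)
accuracy = (pred == labels).mean()
```

With the representations collapsed this way, the probe's fit is exact, so the probe generalizes from very few samples per class.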
Key insight: ideal SSL = supervised classification from a high-dimensional space onto the equivalence classes, using the probing architecture.

This leads to a unifying SSL framework (contrastive or not) with actionable insights, e.g. how to:
- choose projection heads
- choose the dimensionality
- simplify non-contrastive SSL

4/8
**Dimension**

We just showed that the representation's dimensionality should ideally be one less than the number of equivalence classes, i.e. much larger than in current practice.

Smartly increasing the dimension has a huge impact on performance without increasing the parameter count!

≥ 2% accuracy gains on ImageNet

5/8
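One parameter-free way to change representation dimensionality, shown purely for illustration (the paper's actual mechanism may differ), is a fixed random expansion. The sketch below checks that such an expansion loses no linearly-decodable information: any probe on the small representation has an exact counterpart on the expanded one.

```python
import numpy as np

rng = np.random.default_rng(0)

d_small, d_large = 16, 64
# Fixed (non-trainable) random expansion: raises the representation
# dimension without adding any learned parameters.
P = rng.standard_normal((d_small, d_large)) / np.sqrt(d_small)

z = rng.standard_normal((8, d_small))   # batch of small representations
z_big = z @ P                           # expanded representations

# Because P has full row rank, P @ pinv(P) = I, so any linear probe W on
# z is exactly realizable on z_big via the pseudo-inverse of P.
W = rng.standard_normal((d_small, 3))   # some hypothetical downstream probe
W_big = np.linalg.pinv(P) @ W           # equivalent probe on z_big
```

This only shows that expansion is information-preserving; the accuracy gains reported in the thread come from how the larger dimension interacts with SSL training itself.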
**Projection heads**

Current SSL uses two Siamese networks with MLP projection heads.

We prove that one of the heads should be linear.

Intuition: representations should be pretrained the way they will be used downstream.
Linear probing => one linear projection head.

This gives ≥ 1% accuracy gains.

6/8
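A minimal forward-pass sketch of the asymmetric design, with hypothetical shapes and plain numpy matrices standing in for real networks: an MLP projection head on one branch, a single linear head on the other (matching the linear probe used downstream), aligned with a negative-cosine term.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions, just for illustration.
d_in, d_repr, d_proj = 32, 16, 8
W_enc  = rng.standard_normal((d_in, d_repr)) / np.sqrt(d_in)
W_mlp1 = rng.standard_normal((d_repr, d_repr)) / np.sqrt(d_repr)
W_mlp2 = rng.standard_normal((d_repr, d_proj)) / np.sqrt(d_repr)
W_lin  = rng.standard_normal((d_repr, d_proj)) / np.sqrt(d_repr)

def encoder(x):
    return relu(x @ W_enc)

def mlp_head(z):                 # standard nonlinear projection head
    return relu(z @ W_mlp1) @ W_mlp2

def linear_head(z):              # the branch kept linear, matching the
    return z @ W_lin             # linear probe used downstream

x1 = rng.standard_normal((4, d_in))   # one augmented view
x2 = rng.standard_normal((4, d_in))   # another augmented view

p1 = mlp_head(encoder(x1))
p2 = linear_head(encoder(x2))

def cosine(a, b):
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
    return (a * b).sum(axis=1)

loss = -cosine(p1, p2).mean()         # negative-cosine alignment term
```

In a real implementation the two branches would share the encoder weights and be trained by gradient descent; only the head asymmetry is the point here.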
**Non-contrastive SSL**

We show that most prior non-contrastive objectives are approximations of the optimal SSL objective.

We propose DISSL: a much simpler objective (no stop-gradients, no EMA, no Sinkhorn) that better approximates optimal SSL.

DISSL outperforms SwAV/DINO.

7/8
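As a rough sketch of a stop-gradient-free clustering objective in this spirit (a generic stand-in, not DISSL's exact loss): reward cross-view agreement on soft class assignments, and add a marginal-entropy term that rules out the collapsed solution where every example lands in one class.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Logits over K "equivalence classes" for two augmented views of the
# same batch (in practice these come from the network; random here).
K, n = 10, 32
logits_a = rng.standard_normal((n, K))
logits_b = rng.standard_normal((n, K))
p_a, p_b = softmax(logits_a), softmax(logits_b)

# Cross-view agreement: each view's soft assignment should predict the
# other view's assignment (a cross-entropy term).
agreement = -(p_a * np.log(p_b + 1e-9)).sum(axis=1).mean()

# Entropy of the marginal assignment: maximizing it spreads examples
# across classes, preventing collapse without stop-gradients, an EMA
# teacher, or a Sinkhorn step.
marginal = p_a.mean(axis=0)
marginal_entropy = -(marginal * np.log(marginal + 1e-9)).sum()

loss = agreement - marginal_entropy
```

Minimizing `loss` pushes the two views toward confident, matching assignments while the entropy term keeps the class marginal spread out.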
Other actionable insights in the paper, e.g.:
- how to perform SSL for non-linear probes
- how to choose augmentations

If you are at #NeurIPS2022, come to our poster: Hall J #905, tomorrow 4-6pm

Code and pretrained ImageNet models: github.com/YannDubs/Invar…

8/8
Many ideas come from prior work with great collaborators:
- ideal supervised representations: arxiv.org/abs/2201.00057
- ideal robust representations: arxiv.org/abs/2201.00057
- invariance & compression: arxiv.org/abs/2106.10800
@douwekiela @davidjschwab @rama_vedantam @YangjunR @cjmaddison Ben @karen_ullrich
**Edit** correct link is arxiv.org/abs/2209.06235

That’s the problem when you have too many arxiv tabs open 😅
