Come to our talks and posters at #ICLR2021 to discuss our findings on understanding and improving deep learning! Links to the talks, posters, papers, and code are in the thread:

1/7
When Do Curricula Work? (Oral at #ICLR2021)
with @XiaoxiaWShirley and @ethansdyer

Paper: openreview.net/forum?id=tW4QE…
Code: github.com/google-researc…
Video and Poster: iclr.cc/virtual/2021/p…

2/7
Sharpness-Aware Minimization for Efficiently Improving Generalization (Spotlight at #ICLR2021)
with @Foret_p, Ariel Kleiner and @TheGradient

Paper: openreview.net/forum?id=6Tm1m…
Code: github.com/google-researc…
Video and Poster: iclr.cc/virtual/2021/p…
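For readers unfamiliar with SAM: the paper's core idea is a two-step update that seeks flat minima by first perturbing the weights toward the worst-case point in a small L2 ball, then descending using the gradient taken there. Below is a minimal illustrative sketch of that update on a toy quadratic loss, not the authors' released implementation; the function name `sam_step` and the toy setup are mine, and `rho`/`lr` values are arbitrary.

```python
import math

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sketched SAM update on a weight vector w (a list of floats):
    1) first-order inner maximization: move rho along the normalized gradient,
    2) descend at the original w using the gradient from the perturbed point."""
    g = grad_fn(w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12  # avoid division by zero
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]  # worst-case perturbation
    g_sharp = grad_fn(w_adv)  # gradient at the perturbed weights
    return [wi - lr * gi for wi, gi in zip(w, g_sharp)]  # update the ORIGINAL weights

# Toy example: f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = [1.0, -2.0]
for _ in range(100):
    w = sam_step(w, lambda v: list(v))
```

In practice the paper applies this per minibatch inside a standard optimizer (e.g. SGD), at the cost of roughly one extra forward/backward pass per step.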

3/7
Understanding the Failure Modes of Out-of-Distribution Generalization (Poster at #ICLR2021)
with @_vaishnavh and @AJAndreassen

Paper: openreview.net/forum?id=fSTD6…
Code: github.com/google-researc…
Video and Poster: iclr.cc/virtual/2021/p…

4/7
The Deep Bootstrap: Good Online Learners are Good Offline Generalizers (Poster at #ICLR2021)
with @PreetumNakkiran and @HanieSedghi

Paper: openreview.net/forum?id=guetr…
Code: github.com/preetum/deep-b…
Video and Poster: iclr.cc/virtual/2021/p…

5/7
Are wider nets better given the same number of parameters? (Poster at #ICLR2021)
with @_anna_go and @guygr

Paper: openreview.net/forum?id=_zx8O…
Code: github.com/google-researc…
Video and Poster: iclr.cc/virtual/2021/p…

6/7
Extreme Memorization via Scale of Initialization (Poster at #ICLR2021)
with @n0royalroad and @AshokCutkosky

Paper: openreview.net/forum?id=Z4R1v…
Code: github.com/google-researc…
Video and Poster: iclr.cc/virtual/2021/p…

7/7


More from @bneyshabur

13 Jan
Some people say that one shouldn't care about publications, that only quality matters. However, the job market punishes those who don't have publications in top ML venues. I empathize with students and newcomers to ML whose good papers are not getting accepted. #ICLR2021
1/
Long thread at the risk of being judged:

I just realized that in the last 6 years, 21 of my 24 papers have been accepted to top ML conferences on their FIRST submission, even though the majority of them were hastily written borderline papers (not proud of this). How is this possible?
2/
At this point, I'm convinced that this cannot be explained by a combination of luck and paper quality. My belief is that the current system has many unnecessary and sometimes harmful biases, which are #unfair to newcomers and anyone outside of the "norm".
3/
