Want to dive into #NeurIPS2021 but don't know where to start?

Here are some ideas! A thread🧵👇
1. "A 3D Generative Model for Structure-Based Drug Design" is one of the multiple papers at NeurIPS about drug discovery using neural networks.

This model generates molecules that bind to a specific protein binding site.

By Shitong Luo et al.

papers.nips.cc/paper/2021/has…
2. "The Emergence of Objectness: Learning Zero-shot Segmentation from Videos" by Runtao Liu et al.

Leveraging clever self-supervision on videos to segment objects without labels.

papers.nips.cc/paper/2021/has…
3. "Multimodal Few-Shot Learning with Frozen Language Models" by @jacobmenick @serkancabi @arkitus @OriolVinyalsML @FelixHill84 .

Freeze a pre-trained LM and train a vision encoder whose outputs prompt the LM to perform vision/language tasks (rough sketch below).

papers.nips.cc/paper/2021/has…
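A minimal sketch of the idea, not the paper's code: the `lm` interface (`.embed()`, `inputs_embeds=`), names and dimensions below are all my assumptions. The point is that gradients only reach the vision side:

```python
import torch
import torch.nn as nn

class FrozenStylePrefixModel(nn.Module):
    """Sketch: a trainable vision encoder produces 'visual prefix' tokens
    that a frozen causal LM consumes alongside text embeddings.
    Assumes `lm` exposes .embed(token_ids) and accepts inputs_embeds."""

    def __init__(self, lm, vision_backbone, vision_dim, n_prefix=2, d_model=768):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():      # freeze the language model
            p.requires_grad = False
        self.vision = vision_backbone       # stays trainable
        self.to_prefix = nn.Linear(vision_dim, n_prefix * d_model)
        self.n_prefix, self.d_model = n_prefix, d_model

    def forward(self, images, token_ids):
        feats = self.vision(images)                                # (B, vision_dim)
        prefix = self.to_prefix(feats).view(-1, self.n_prefix, self.d_model)
        text = self.lm.embed(token_ids)                            # (B, T, d_model)
        # Gradients flow through `prefix` into the vision encoder only.
        return self.lm(inputs_embeds=torch.cat([prefix, text], dim=1))
```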
4. "Efficient Training of Retrieval Models using Negative Cache" by Erik Lindgren et al.

A proposal for training dense retrieval models without huge batches or lots of memory, by caching negatives (toy sketch below).

papers.nips.cc/paper/2021/has…
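A toy rendition of the theme: a plain FIFO memory bank of past document embeddings reused as extra negatives. The paper's actual cache and sampling scheme are more refined; everything below is illustrative.

```python
import torch
import torch.nn.functional as F

class NegativeCache:
    """Toy FIFO cache of past document embeddings, reused as negatives."""

    def __init__(self, size=8192, dim=256):
        self.buf = torch.zeros(size, dim)
        self.count = 0                      # total embeddings ever added
        self.size = size

    def add(self, emb):                     # emb: (B, dim)
        idx = (self.count + torch.arange(emb.size(0))) % self.size
        self.buf[idx] = emb.detach()        # negatives carry no gradient
        self.count += emb.size(0)

    def negatives(self):
        return self.buf[: min(self.count, self.size)]

def retrieval_loss(q, d, cache):
    """Contrastive loss: the paired doc is the positive, cached docs are
    the negatives, so no giant in-batch negative set is needed."""
    pos = (q * d).sum(-1, keepdim=True)     # (B, 1)
    neg = q @ cache.negatives().t()         # (B, N_cache)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive is column 0
    cache.add(d)
    return F.cross_entropy(logits, labels)
```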
5. "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text" by @AkbariH70 Liangzhe Yuan @RuiQian3 Wei-Hong Chuang, Shih-Fu Chang, @YinCui1 @BoqingGo.

The promised multimodal future is here!

papers.nips.cc/paper/2021/has…
6. "Robust Predictable Control" by @ben_eysenbach, @rsalakhu and @svlevine.

Seeing RL through the lens of compression: how do agents behave when they favor policies that are compressible (i.e. easy to predict)? A toy objective below.

papers.nips.cc/paper/2021/has…
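In spirit, the objective trades reward against an information cost. Here is a toy one-step version I'm improvising (not the paper's exact loss), with the cost taken as the KL between the agent's state encoding and a fixed prior:

```python
import torch
import torch.distributions as D

def compression_regularized_reward(reward, enc_mu, enc_std, beta=0.1):
    """Toy compression-flavored objective: reward minus beta * info cost.
    The cost is KL(N(enc_mu, enc_std) || N(0, 1)) -- the 'bits' the agent
    spends representing the state. Low-KL (predictable) policies win."""
    posterior = D.Normal(enc_mu, enc_std)
    prior = D.Normal(torch.zeros_like(enc_mu), torch.ones_like(enc_std))
    info_cost = D.kl_divergence(posterior, prior).sum(-1)
    return reward - beta * info_cost
```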
7. "FLEX: Unifying Evaluation for Few-Shot NLP" by Jonathan Bragg et al.

Now we can make apples-to-apples comparisons of few-shot performance!

papers.nips.cc/paper/2021/has…
8. "Partition and Code: learning how to compress graphs" by @gbouritsas @loukasa_tweet @AspectStalence @mmbronstein

How would you build a compression algorithm native to graphs? -> partition and code (toy caricature below)

papers.nips.cc/paper/2021/has…
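A hand-rolled caricature of that recipe: the paper learns both the partition and the subgraph dictionary; this toy just shows why partitioning helps, by entropy-coding the identities of repeated subgraph "atoms". The canonicalization here is a crude stand-in.

```python
import math
from collections import Counter

def canonical_form(edges):
    """Crude subgraph canonicalization: degree sequence + edge count.
    (A real system would use isomorphism-aware codes.)"""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return (tuple(sorted(deg.values())), len(edges))

def partition_code_length(blocks):
    """Entropy lower bound, in bits, for coding the block identities of a
    partitioned graph: frequent 'atoms' get short codes.
    blocks: list of edge lists."""
    forms = [canonical_form(b) for b in blocks]
    counts = Counter(forms)
    n = len(forms)
    return sum(-c * math.log2(c / n) for c in counts.values())

# Example: a graph cut into three triangles and one 4-cycle
blocks = [
    [(0, 1), (1, 2), (2, 0)],
    [(3, 4), (4, 5), (5, 3)],
    [(6, 7), (7, 8), (8, 6)],
    [(9, 10), (10, 11), (11, 12), (12, 9)],
]
print(partition_code_length(blocks))  # ~3.25 bits for the block identities
```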
9. "Learning to Draw: Emergent Communication through Sketching" by @DanielaMihai13 and @jon_hare

How two models pretty much learn to play Pictionary.

papers.nips.cc/paper/2021/has…
10. "Fixes That Fail: Self-Defeating Improvements in Machine-Learning Systems" by Ruihan Wu et al.

Watch out! Improving one part of an ML system can worsen overall performance...😔

papers.nips.cc/paper/2021/has…
Read our commentary on this selection, plus more interesting papers, on our blog: zeta-alpha.com/post/neurips-2…
