I keep seeing hasty analyses of this experiment. Let me put mine among them, because it's not as bad as it looks.

Here are the numbers: 99 papers were accepted by both A and B, 94 were accepted by A but rejected by B, and 105 were rejected by A but accepted by B.
But in real life, committee B does not exist, so only the ~200 papers accepted by A would have made it!

So among the accepted papers (as decided by A), about half (99/(99+94)) got in because they're "universally good", and about half (94/(99+94)) because of luck. And a comparable number (105, i.e. ~1/2 of the accepted count) were unlucky: they were rejected even though B would have taken them.
Extend that to the full conference: if we assume a 25% acceptance rate, then about 13% of all submissions are accepted because they're really good, 13% are accepted because they're lucky, 13% are rejected because they're unlucky, and the remaining ~60% are rejected because they're not good enough.
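As a sanity check, the arithmetic above can be reproduced in a few lines of Python (counts from the thread; the 25% acceptance rate is the thread's assumption):

```python
# Raw counts from the experiment, as quoted above.
both = 99        # accepted by committees A and B
only_a = 94      # accepted by A, rejected by B
only_b = 105     # rejected by A, accepted by B

accepted = both + only_a             # papers accepted in real life (A's decision)
frac_good = both / accepted          # "universally good" share of accepted papers
frac_lucky = only_a / accepted       # accepted thanks to luck
frac_unlucky = only_b / accepted     # rejected despite B liking them, vs. accepted count

rate = 0.25                          # assumed overall acceptance rate
print(round(rate * frac_good, 3))    # share of ALL submissions accepted on merit: 0.128
print(round(rate * frac_lucky, 3))   # accepted by luck: 0.122
print(round(rate * frac_unlucky, 3)) # rejected by bad luck: 0.136
print(round(1 - rate - rate * frac_unlucky, 3))  # rejected as not good enough: 0.614
```

So the round "13% / 13% / 13% / 60%" figures hold up to rounding.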
Honestly, it's way better than what I expected, and I'm not sure the process can be improved very much...


More from @david_picard

22 Nov
Here is an unusual arXiv preprint of mine: "Non asymptotic bounds in asynchronous sum-weight gossip protocols", arxiv.org/abs/2111.10248
This is a summary of unpublished work with Jérôme Fellus and Stéphane Garnier from way back in 2016 on decentralized machine learning.
The context: you have N nodes, each with a fraction of the data, and you want to learn a global predictor without exchanging data, without having nodes wait for each other, and without fixing the communication topology (which nodes are neighbors).
That's essentially Jérôme's PhD.
We wanted an error bound w.r.t. the number of messages exchanged, because it gives you an idea of when your predictor becomes usable.
Turns out, it's tough to get non-asymptotic results, but we got something not that bad for fully connected graphs.
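To give a flavor of the setting, here is a toy sketch of a sum-weight gossip average on a fully connected graph — my own illustrative version, not the paper's protocol. Each node holds a (sum, weight) pair and asynchronously pushes half of it to a random neighbor; every local ratio converges to the global average without any central coordinator:

```python
import random

random.seed(0)
x = [1.0, 2.0, 3.0, 4.0]           # each node's local value (e.g. a model parameter)
n = len(x)
s = list(x)                        # running sums, one per node
w = [1.0] * n                      # running weights, one per node

for _ in range(2000):              # one message exchange per iteration
    i = random.randrange(n)        # a sender wakes up asynchronously
    j = random.choice([k for k in range(n) if k != i])  # random neighbor
    # the sender keeps half of its (sum, weight) mass and pushes the other half
    s[i] *= 0.5; w[i] *= 0.5
    s[j] += s[i]; w[j] += w[i]

estimates = [s[k] / w[k] for k in range(n)]
print(estimates)                   # every node's estimate is close to the average 2.5
```

The total sum and total weight are conserved at every exchange, which is what makes the local ratios s/w drift toward the global mean; the paper's question is how fast, non-asymptotically.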
11 Oct
Wondering how to detect when your neural network is about to predict pure nonsense in a safety-critical scenario?

We answer your questions in our #ICCV2021 @ICCV_2021 paper!

Thursday 1am (CET) or Friday 6pm (CET), Session 12, ID: 3734

📜 openaccess.thecvf.com/content/ICCV20…

Thread 🧵👇
The problem with DNNs is they are trained on carefully curated datasets that are not representative of the diversity we find in the real world.
That's especially true for road datasets.
In the real world, we have to face "unknown unknowns", i.e., unexpected objects with no label.
How can we detect such situations?
We propose a combination of 2 principles that lead to very good results:
1. Disentangle the task (classification, segmentation, ...) from the out-of-distribution detection.
2. Train the detector using generated adversarial samples as a proxy for OoD.
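A minimal sketch of principle 2, with plain Gaussian perturbations standing in for true adversarial samples and a simple distance-threshold detector standing in for the learned one (all names and choices here are mine, purely illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(1000, 2))            # in-distribution features
proxies = in_dist + rng.normal(0.0, 5.0, size=(1000, 2))  # perturbed samples as OoD proxies

# Score each sample by its squared distance from the training-data center,
# then "fit" the detector by placing a threshold between the two score medians.
score_in = (in_dist ** 2).sum(axis=1)
score_out = (proxies ** 2).sum(axis=1)
threshold = (np.median(score_in) + np.median(score_out)) / 2

acc = 0.5 * ((score_in < threshold).mean() + (score_out >= threshold).mean())
print(f"proxy OoD detection accuracy: {acc:.2f}")
```

The key point it illustrates is the decoupling: the detector is fit only on the proxy OoD data, so the task model never has to be retrained.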
25 Aug
"torch.manual_seed(3407) is all you need"!
draft 📜: davidpicard.github.io/pdf/lucky_seed…
Sorry for the title. I promise it's not (entirely) just for trolling. It's my little spare-time project of this summer: investigating unaccounted-for randomness in #ComputerVision and #DeepLearning.
🧵👇 1/n
The idea is simple: after years of reviewing deep learning papers, I am frustrated at never seeing a paragraph showing how robust the results are w.r.t. the randomness (initial weights, batch composition, etc.). 2/n
After seeing several videos by @skdh about how experimental physics claims tend to disappear through repetition, I got the idea of gauging the influence of randomness by scanning a large number of seeds. 3/n
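The seed-scanning idea can be sketched like this; `train_once` is a hypothetical stand-in for a real training run, not the paper's code:

```python
import random
import statistics

def train_once(seed):
    """Stand-in for one full training run: returns a 'final accuracy'
    that depends on the seed, mimicking run-to-run randomness."""
    rng = random.Random(seed)
    return 0.90 + 0.02 * (rng.random() - 0.5)  # fixed signal + seed-dependent noise

# Scan many seeds, changing nothing else, and report the spread.
scores = [train_once(seed) for seed in range(100)]
print(f"mean={statistics.mean(scores):.4f} std={statistics.stdev(scores):.4f}")
print(f"best seed={max(range(100), key=lambda s: scores[s])}")
```

Reporting the mean and spread over seeds (rather than one cherry-picked run) is exactly the paragraph the tweet above wishes reviewers would see.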
