David Picard
Computer Vision/Machine Learning research @ImagineEnpc / LIGM, École des Ponts. Music & overall happiness. A few flowers too. Born well below 350ppm.
Mar 26
Folks, I'm furious🤬and need to vent. I have this paper that has been in rejection hell for a year and it needs some love🥰

If you're in retail/image search, it could have a major impact on your business.

I'll explain it👇and then rant on bad reviews.
📜: arxiv.org/abs/2306.02928
We introduce a new task, "Referred Visual Search", which consists of performing a search with an image query and a text specifying the subject of interest within the query.
A use case in fashion would be: you give an image of your favorite celebrity with the text "I want the same shoes".
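To make the task concrete, here is a toy sketch of one possible setup (not the paper's architecture; the encoder, feature dimensions, and catalog are all made up): embed the query image conditioned on the text, then rank catalog items by cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalQueryEncoder(nn.Module):
    """Toy encoder: fuses an image embedding with a text embedding
    so the query focuses on the referred item (e.g. "the shoes")."""
    def __init__(self, dim=256):
        super().__init__()
        # stand-ins for real backbones (e.g. a ViT and a text transformer)
        self.image_proj = nn.Linear(512, dim)
        self.text_proj = nn.Linear(512, dim)
        self.fusion = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, image_feat, text_feat):
        z = torch.cat([self.image_proj(image_feat), self.text_proj(text_feat)], dim=-1)
        return F.normalize(self.fusion(z), dim=-1)

# Retrieval: rank catalog items by cosine similarity to the conditioned query.
encoder = ConditionalQueryEncoder()
image_feat = torch.randn(1, 512)   # features of the celebrity photo (made up)
text_feat = torch.randn(1, 512)    # features of "I want the same shoes" (made up)
catalog = F.normalize(torch.randn(10_000, 256), dim=-1)  # precomputed item embeddings
query = encoder(image_feat, text_feat)
scores = query @ catalog.T
top10 = scores.topk(10).indices    # indices of the 10 closest catalog items
```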
Nov 23, 2021
I keep seeing hasty analyses of this experiment. Let me add mine to the pile, because it's not as bad as it looks.

Here are the numbers: 99 papers were accepted by both A and B, 94 were accepted by A but rejected by B, and 105 were rejected by A but accepted by B. But in real life, B does not exist and only ~200 papers would have made it!

So among the accepted papers (as decided by A), about 1/2 (99/(99+94)) got there because they're "universally good" and about 1/2 (94/(99+94)) because of luck. And roughly as many (105/(99+94)) were unlucky and got rejected.
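To make the arithmetic explicit, the same back-of-the-envelope computation with the numbers quoted above:

```python
# Numbers from the consistency experiment quoted above.
both_accepted = 99   # accepted by committee A and committee B
only_A = 94          # accepted by A, rejected by B
only_B = 105         # rejected by A, accepted by B

accepted_by_A = both_accepted + only_A            # 193 papers actually "make it"
frac_universal = both_accepted / accepted_by_A    # ~0.51: accepted regardless of committee
frac_lucky = only_A / accepted_by_A               # ~0.49: accepted thanks to the committee draw
frac_unlucky = only_B / accepted_by_A             # ~0.54: comparably many rejected by bad luck

print(f"universally good: {frac_universal:.2f}, lucky: {frac_lucky:.2f}, unlucky: {frac_unlucky:.2f}")
```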
Nov 22, 2021
Here's an unusual arXiv preprint of mine: "Non asymptotic bounds in asynchronous sum-weight gossip protocols", arxiv.org/abs/2111.10248
This is a summary of unpublished work with Jérôme Fellus and Stéphane Garnier from way back in 2016 on decentralized machine learning.
1/5
The context is: you have N nodes, each holding a fraction of the data, and you want to learn a global predictor without exchanging data, without having nodes wait for others, and without fixing the communication topology (which nodes are neighbors of which).
That's essentially Jérôme's PhD.
2/5
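To illustrate the mechanics, here is a toy simulation of sum-weight (push-sum style) gossip averaging, the building block this family of protocols relies on; it is not the paper's algorithm or its learning setup. Each node keeps a (sum, weight) pair, pushes half of it to a random peer whenever it wakes up, and its estimate sum/weight converges to the global average.

```python
import random

random.seed(0)
N = 20
values = [random.gauss(0.0, 1.0) for _ in range(N)]  # each node's local quantity (made up)
s = list(values)   # running sums
w = [1.0] * N      # running weights

for _ in range(5000):
    i = random.randrange(N)                              # a node wakes up (asynchronous)
    j = random.choice([k for k in range(N) if k != i])   # random peer, no fixed topology
    # Split the (sum, weight) pair: keep half, push the other half to the peer.
    s[i] *= 0.5; w[i] *= 0.5
    s[j] += s[i]; w[j] += w[i]

estimates = [s_k / w_k for s_k, w_k in zip(s, w)]        # each node's estimate of the mean
print(max(abs(e - sum(values) / N) for e in estimates))  # all nodes agree on the average
```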
Oct 11, 2021
Wondering how to detect when your neural network is about to predict pure nonsense in a safety-critical scenario?

We answer your questions in our #ICCV2021 @ICCV_2021 paper!

Thursday 1am (CET) or Friday 6pm (CET), Session 12, ID: 3734

📜 openaccess.thecvf.com/content/ICCV20…

Thread 🧵👇
The problem with DNNs is that they are trained on carefully curated datasets that are not representative of the diversity we find in the real world.
That's especially true for road datasets.
In the real world, we have to face "unknown unknowns", i.e., unexpected objects with no label.
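The thread doesn't spell out the method itself; as a point of reference, here is the classic max-softmax confidence baseline for flagging inputs the network is likely to get wrong. This is a common baseline, not the paper's detector, and the model and threshold below are just stand-ins.

```python
import torch
import torch.nn.functional as F

def confidence_score(logits: torch.Tensor) -> torch.Tensor:
    """Max-softmax probability: a classic (if weak) signal that an input
    may be an 'unknown unknown'. Low scores -> likely unreliable prediction."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Toy usage with a placeholder model; a real system would use the trained network.
model = torch.nn.Linear(128, 10)
x = torch.randn(4, 128)
with torch.no_grad():
    scores = confidence_score(model(x))
suspicious = scores < 0.5   # threshold would be tuned on validation data in practice
```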
Aug 25, 2021
"torch.manual_seed(3407) is all you need"!
draft 📜: davidpicard.github.io/pdf/lucky_seed…
Sorry for the title. I promise it's not (entirely) just for trolling. It's my little spare-time project of this summer to investigate unaccounted-for randomness in #ComputerVision and #DeepLearning.
🧵👇 1/n
The idea is simple: after years of reviewing deep learning stuff, I am frustrated by never seeing a paragraph that shows how robust the results are w.r.t. the randomness (initial weights, batch composition, etc.). 2/n
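This is the kind of experiment the draft argues for: rerun the exact same training with several seeds and report the spread instead of a single number. A minimal sketch, where train_and_evaluate is a hypothetical stand-in for a full training run:

```python
import statistics
import torch

def train_and_evaluate(seed: int) -> float:
    """Hypothetical stand-in for a full training run returning test accuracy."""
    torch.manual_seed(seed)   # controls init, dropout, data shuffling, ...
    # ... build the model, train it, evaluate on the test set ...
    return 0.90 + 0.01 * torch.rand(1).item()   # placeholder result

accs = [train_and_evaluate(seed) for seed in range(10)]
print(f"accuracy: {statistics.mean(accs):.3f} ± {statistics.stdev(accs):.3f} over {len(accs)} seeds")
```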