David Picard · Oct 11, 2021
Wondering how to detect when your neural network is about to predict pure nonsense in a safety-critical scenario?

We answer your questions in our #ICCV2021 @ICCV_2021 paper!

Thursday 1am (CET) or Friday 6pm (CET), Session 12, ID: 3734

📜 openaccess.thecvf.com/content/ICCV20…

Thread 🧵👇
The problem with DNNs is that they are trained on carefully curated datasets that are not representative of the diversity we find in the real world.
That's especially true for road datasets.
In the real world, we have to face "unknown unknowns", i.e., unexpected objects with no label.
How can we detect such situations?
We propose a combination of 2 principles that leads to very good results:
1_ Disentangle the task (classification, segmentation, ...) from the Out-of-Distribution detection.
2_ Train the detector using generated adversarial samples as a proxy for OoD.
For 1_, we propose an auxiliary network called obsnet, devoted solely to predicting OoD. It mimics the architecture of the main network, with added residual connections from the main network's activation maps in order to *observe* its decision process.
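To make that concrete, here's a minimal PyTorch sketch of the observer idea, with made-up channel widths and a simple fusion scheme; the actual obsnet in the repo mirrors the full segmentation architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObsNet(nn.Module):
    """Observer network: consumes intermediate activation maps of the
    frozen main network (e.g. grabbed with forward hooks) and predicts
    a per-pixel probability that the main network is wrong."""
    def __init__(self, feat_channels=(64, 128, 256), hidden=64):
        super().__init__()
        # one 1x1 adapter per observed activation map (the residual inputs)
        self.adapters = nn.ModuleList(
            nn.Conv2d(c, hidden, kernel_size=1) for c in feat_channels
        )
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),  # logit of "main net fails"
        )

    def forward(self, feats, out_size):
        # feats: activation maps hooked from the main network, coarse to fine
        fused = 0
        for adapter, f in zip(self.adapters, feats):
            fused = fused + F.interpolate(adapter(f), size=out_size,
                                          mode="bilinear", align_corners=False)
        return self.head(fused)  # (B, 1, H, W) failure logits
```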
How do we train the obsnet, you ask?
Simple: we train it to detect when the main network fails.
However, errors are rare because the main network is accurate (obviously you take the best one 😅), and those errors are not representative of OoD.
So how do we solve this problem?
We introduce Local Adversarial Attacks (LAA) to trigger failures of the main network.
- We now have as many training samples as required
- We hallucinate OoD-like objects using blind spots of the main network.
It's all local, and it doesn't change the accuracy of the main network.
In practice, we select a random shape and attack the main network's prediction inside it. That's all! 😲
So simple, everybody can successfully code it! 😎
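Here's roughly what it could look like in PyTorch, a sketch assuming a single FGSM step restricted to a binary mask (the real implementation lives in the repo):

```python
import torch
import torch.nn.functional as F

def local_adversarial_attack(net, x, mask, eps=0.02):
    """Perturb the input only inside `mask` (the random shape) so that
    the main network's own prediction degrades there. One-step sketch."""
    x = x.clone().requires_grad_(True)
    with torch.no_grad():
        target = net(x).argmax(1)           # the network's current prediction
    loss = F.cross_entropy(net(x), target)  # attacking = increasing this loss
    loss.backward()
    # gradient-sign step, applied only inside the random shape
    x_adv = x + eps * x.grad.sign() * mask  # mask: (B, 1, H, W) in {0, 1}
    net.zero_grad()                         # the main network stays untouched
    return x_adv.detach()
```

The obsnet is then trained on these attacked images, with the perturbed region providing the "failure" labels (my reading of the setup; see the repo for the exact labeling).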
At inference, nothing is changed. LAA is used only during training, so inference time and accuracy are preserved.
We obtain super competitive results both in terms of OoD detection accuracy and in terms of inference time.
We did massive experiments by implementing and testing loads of existing methods on 3 different datasets.
All relevant info below:
📜 openaccess.thecvf.com/content/ICCV20…
🤖 github.com/valeoai/obsnet
📽️
⏰ Session 12 #ICCV2021 Thursday 14/10 at 1 AM (CET) and Friday 15/10 at 6 PM (CET) *ID 3734*

All the hard work by @victorbesnier1 with help from @abursuc and me.
~ FIN ~


More from @david_picard

Mar 26
Folks, I'm furious🤬and need to vent. I have this paper that has been in rejection hell for a year and it needs some love🥰

If you're in retail/image search, it could have major impact on your business.

I'll explain it👇and then rant on bad reviews.
📜: arxiv.org/abs/2306.02928
We introduce a new task, "Referred Visual Search", which consists of searching with an image query plus a text specifying the subject of interest in the query.
A use case in fashion would be: you give an image of your favorite celebrity with the text "I want the same shoes".
Text+image queries have existed in the literature for a long time, but here the additional information is intended to focus on a part of the image.
So the goal is to produce a text-conditional embedding, such that you retrieve the correct product among all the stuff in the image.
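To make that concrete, here's a toy sketch of one way such a text-conditional embedding could be built — a hypothetical cross-attention module, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ConditionalEmbedder(nn.Module):
    """Toy text-conditional image embedding: the text query attends over
    image patch features so the pooled embedding focuses on the referred
    item rather than the whole scene."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, patch_feats, text_feat):
        # patch_feats: (B, N, D) image patch tokens
        # text_feat:   (B, D) pooled text query, e.g. "the shoes"
        q = text_feat.unsqueeze(1)                    # (B, 1, D)
        pooled, _ = self.attn(q, patch_feats, patch_feats)
        emb = self.proj(pooled.squeeze(1))            # (B, D)
        return nn.functional.normalize(emb, dim=-1)   # ready for retrieval
```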
Nov 23, 2021
I keep seeing hasty analyses of this experiment. Let me put mine among them, because it's not as bad as it looks.

Here are the numbers: 99 papers were both accepted by A and B, 94 were accepted by A but rejected by B and 105 were rejected by A but accepted by B.
But in real life, B does not exist, and only ~200 papers would have made it!

So on average, among the accepted papers (as decided by A), 1/2 (99/(99+94)) got there because they're "universally good", 1/2 (94/(99+94)) because of luck. And about 1/2 (105/(99+94)) were unlucky.
Extend that to the full conference: if we assume 25% acceptance rate, then 13% of all submissions are accepted because they're really good. 13% are accepted because they're lucky, 13% are rejected because they're unlucky and 60% are rejected because they're not good enough.
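If you want to redo the arithmetic yourself (numbers straight from the tweets above):

```python
both   = 99    # accepted by A and by B
a_only = 94    # accepted by A, rejected by B
b_only = 105   # rejected by A, accepted by B

accepted_by_a = both + a_only           # 193 papers actually get in
frac_good     = both   / accepted_by_a  # ~0.51: "universally good"
frac_lucky    = a_only / accepted_by_a  # ~0.49: in by luck
frac_unlucky  = b_only / accepted_by_a  # ~0.54: good but rejected

rate = 0.25  # assume a 25% acceptance rate
print(rate * frac_good, rate * frac_lucky, rate * frac_unlucky)
# ~0.13  0.12  0.14  -> the roughly 13% / 13% / 13% split above
```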
Nov 22, 2021
Here is an unusual arXiv preprint of mine: "Non asymptotic bounds in asynchronous sum-weight gossip protocols", arxiv.org/abs/2111.10248
This is a summary of unpublished work with Jérôme Fellus and Stéphane Garnier from way back in 2016 on decentralized machine learning.
1/5
The context: you have N nodes, each with a fraction of the data, and you want to learn a global predictor without exchanging data, without nodes waiting for each other, and without fixing the communication topology (which nodes are neighbors).
That's essentially Jérôme's PhD.
2/5
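For readers who haven't met sum-weight gossip before, here's a tiny simulation of the protocol family (my sketch of the classic push-sum style update, not necessarily the paper's exact algorithm):

```python
import random

def sum_weight_gossip(values, steps=10_000, seed=0):
    """Asynchronous sum-weight gossip sketch: every node keeps a pair
    (s, w); at each tick a random node pushes half of its pair to a
    random peer; each local estimate s/w converges to the average."""
    rng = random.Random(seed)
    n = len(values)
    s = list(values)   # weighted sums, s_i = x_i initially
    w = [1.0] * n      # weights, w_i = 1 initially
    for _ in range(steps):
        i = rng.randrange(n)                              # async sender
        j = rng.choice([k for k in range(n) if k != i])   # random peer
        s[i] *= 0.5; w[i] *= 0.5                          # keep half...
        s[j] += s[i]; w[j] += w[i]                        # ...push half to j
    return [si / wi for si, wi in zip(s, w)]

print(sum_weight_gossip([1.0, 2.0, 3.0, 10.0]))  # all values close to 4.0
```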
We wanted to have an error bound w.r.t. the number of messages exchanged, because it gives you an idea of when your predictor is usable.
Turns out, it's tough to get non-asymptotic results, but we got something not that bad for fully connected graphs.
3/5
Aug 25, 2021
"torch.manual_seed(3407) is all you need"!
draft 📜: davidpicard.github.io/pdf/lucky_seed…
Sorry for the title. I promise it's not (entirely) just for trolling. It's my little spare-time project of this summer: investigating unaccounted-for randomness in #ComputerVision and #DeepLearning.
🧵👇 1/n
The idea is simple: after years of reviewing deep learning stuff, I am frustrated by never seeing a paragraph that shows how robust the results are w.r.t. the randomness (initial weights, batch composition, etc.). 2/n
After seeing several videos by @skdh about how experimental physics claims tend to disappear through repetition, I got the idea of gauging the influence of randomness by scanning a large number of seeds. 3/n
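The scanning loop itself is trivial; something like this sketch, where train_and_eval is a placeholder for a full training run:

```python
import torch

def scan_seeds(train_and_eval, n_seeds=100):
    """Measure how much of the final score is just the random seed."""
    scores = []
    for seed in range(n_seeds):
        torch.manual_seed(seed)   # fixes init, dropout, shuffling, ...
        scores.append(train_and_eval())
    scores = torch.tensor(scores)
    return scores.mean().item(), scores.std().item(), scores.max().item()
```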