1/6 Deep classifiers seem to be extremely invariant to *task-relevant* changes. We can change the content of any ImageNet image, without changing model predictions over the 1000 classes at all. Blog post @ medium.com/@j.jacobsen/de…. with @JensBehrmann Rich Zemel @MatthiasBethge
2/6 To show this, we design an invertible classifier with a simplified read-out structure. This allows us to combine logits (Zs here) of one image with everything the classifier does not look at (Zn here) from another image, invert and inspect the result.
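The swap described above can be sketched in a few lines. This is a toy illustration only: a fixed orthogonal linear map stands in for a trained invertible network, and the names (z_s, z_n, forward, inverse) are illustrative, not taken from the paper's code.

```python
# Toy sketch of the logit/nuisance swap: split an invertible map's output
# into z_s ("logits") and z_n (everything else), recombine across images,
# and invert. An orthogonal matrix plays the role of the invertible network.
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                                    # input dim, number of classes
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # orthogonal => inverse is Q.T

def forward(x):
    z = Q @ x
    return z[:k], z[k:]                        # z_s: logit dims, z_n: the rest

def inverse(z_s, z_n):
    return Q.T @ np.concatenate([z_s, z_n])

x_a = rng.normal(size=d)                       # source of the logits
x_b = rng.normal(size=d)                       # source of the "content"

zs_a, _ = forward(x_a)
_, zn_b = forward(x_b)

x_hybrid = inverse(zs_a, zn_b)                 # combine and invert
zs_hybrid, _ = forward(x_hybrid)

# The hybrid input produces exactly the logits of x_a,
# even though most of its coordinates come from x_b.
print(np.allclose(zs_hybrid, zs_a))            # → True
```

Because the map is bijective, no information is lost: anything not read out as z_s can be freely replaced without the classifier noticing.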
3/6 We have stumbled upon what may be the first analytical adversarial attack. Our approach allows us to change image content arbitrarily without changing the logit outputs at all. The middle row shows images with the logits of the top row but the content of the bottom row.
4/6 We call the phenomenon invariance-based adversarial examples, a complementary viewpoint to the classical perturbation-based case. We ask: which task-relevant directions is my classifier invariant to? Instead of: which task-irrelevant directions is my classifier sensitive to?
5/6 An information-theoretic analysis reveals that cross-entropy is (in part) responsible for this, as it does not discourage such invariance. We extend the objective with an independence term that lets us explicitly control invariance. This fixes the problem in various settings.
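One way to picture such an independence term: a nuisance classifier tries to read the label from z_n, and the main objective is penalized when it succeeds. This is a hedged, simplified sketch, not the paper's exact objective; the function names, the weighting lam, and the single-step formulation (in practice this would be trained adversarially) are all assumptions.

```python
# Sketch: cross-entropy on the logit partition z_s, plus a term that
# penalizes label information remaining in the nuisance partition z_n.
# Illustrative only -- not the paper's exact independence objective.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, y):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def objective(logits_zs, logits_from_zn, y, lam=1.0):
    # Standard CE on z_s keeps the classifier accurate.
    ce_main = cross_entropy(logits_zs, y)
    # CE of a (hypothetical) classifier reading the label from z_n:
    # the lower it is, the more label information z_n still carries,
    # so we subtract it, i.e. the main model is rewarded for making
    # z_n uninformative about y.
    ce_nuisance = cross_entropy(logits_from_zn, y)
    return ce_main - lam * ce_nuisance
```

Plain cross-entropy alone corresponds to lam = 0: it never asks whether z_n encodes the label, which is exactly the loophole the independence term closes.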
6/6 If you are interested in the details, check out the full paper. It is going to be presented @iclr2019: arxiv.org/abs/1811.00401

Much work to be done to better understand the role of excessive invariance for generalization and adversarial vulnerability!