jörn jacobsen
pushing representation learning boundaries at 🍏 prev: @VectorInst/@UofT, @bethgelab, @UvA_Amsterdam, @maxplanckpress
Mar 25, 2019 · 6 tweets
1/6 Deep classifiers seem to be extremely invariant to *task-relevant* changes. We can change the content of any ImageNet image without changing the model's predictions over the 1000 classes at all. Blog post @ medium.com/@j.jacobsen/de…. With @JensBehrmann, Rich Zemel, @MatthiasBethge.

2/6 To show this, we design an invertible classifier with a simplified read-out structure. This lets us combine the logits (Zs here) of one image with everything the classifier does not look at (Zn here) from another image, invert, and inspect the result.
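The swap-and-invert idea can be sketched with a toy stand-in: below, a random orthogonal matrix plays the role of the invertible network, the first few output dimensions act as the logits Zs, and the remaining dimensions act as the nuisance variables Zn. The matrix, dimensions, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (an assumption, not the paper's model): an orthogonal
# matrix W is a bijective "network"; the first n_classes outputs are
# the logits Zs, the rest are the nuisance variables Zn.
d, n_classes = 8, 3
W, _ = np.linalg.qr(rng.normal(size=(d, d)))  # orthogonal => invertible

def forward(x):
    z = W @ x
    return z[:n_classes], z[n_classes:]  # (Zs, Zn)

def inverse(zs, zn):
    # For an orthogonal W, the inverse is just the transpose.
    return W.T @ np.concatenate([zs, zn])

x1, x2 = rng.normal(size=d), rng.normal(size=d)
zs1, _ = forward(x1)          # logits of image 1
_, zn2 = forward(x2)          # everything the classifier ignores in image 2
x_hybrid = inverse(zs1, zn2)  # hybrid input: classified like image 1

# The hybrid maps back exactly to the combined code.
zs_h, zn_h = forward(x_hybrid)
print(np.allclose(zs_h, zs1), np.allclose(zn_h, zn2))  # True True
```

The point of the construction: x_hybrid carries image 2's nuisance content yet produces image 1's logits exactly, which is what makes the classifier's invariance inspectable.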