Thread by Peyman Milanfar (9 tweets)
1/9 There's recently been much interest in the ML community in training denoisers w/o ground truth. Methods include #noise2noise and #noise2void. Main idea: clean/noisy pairs are unnecessary. Noisy images suffice to train. Let's consider these works in their proper context
2/9 The notion that one can learn denoising without seeing clean/noisy pairs isn't new -- we've done it for decades: any modern denoiser (BM3D, bilateral, NLM) uses *only* the noisy image and has no explicit knowledge of what a clean image should look like.
3/9 A denoiser that computes its kernel (adaptively) based on the noisy image x is often expressible in pseudo-linear form W(x)*x where the rows of W(x) contain the weights. Computing W(x) is equivalent to computing an empirical estimate of a prior on x. ieeexplore.ieee.org/document/63759…
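A minimal sketch of this pseudo-linear form (my illustration, not from the thread): NLM-style weights are computed from the noisy signal itself, and the output is W(x) @ x. Real NLM would compare patches; here scalar samples stand in for patches.

```python
import numpy as np

def pseudo_linear_denoise(x, h=0.5):
    """Denoise a 1-D noisy signal x in pseudo-linear form W(x) @ x,
    where each row of the data-dependent matrix W(x) holds NLM-style
    weights computed from the noisy signal itself -- no clean data."""
    # Pairwise similarities between noisy samples (a stand-in for
    # patch distances in a full NLM implementation).
    d2 = (x[:, None] - x[None, :]) ** 2
    W = np.exp(-d2 / (2 * h ** 2))
    W /= W.sum(axis=1, keepdims=True)  # rows sum to 1
    return W @ x
```

Note that W depends on x, so the overall map is nonlinear even though each evaluation is a matrix-vector product.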
4/9 The roots of this idea are old, and deep. Take the Wiener filter: optimal in L2, based on knowledge of the power-spectrum of the clean signal. Yet we apply the Wiener filter all the time without ground truth. We *estimate* local SNR and form the empirical Wiener filter.
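A toy version of that empirical Wiener filter (my sketch; the periodogram normalization and the assumption of known noise variance are choices I'm making, not details from the thread):

```python
import numpy as np

def empirical_wiener(x, noise_var):
    """Empirical Wiener filter in the DFT domain: estimate the clean
    signal's power spectrum from the noisy spectrum itself, then apply
    the Wiener gain S / (S + noise_var) per frequency.  No ground-truth
    power spectrum is used; noise_var is assumed known (or estimated
    separately, e.g. from a flat region)."""
    X = np.fft.fft(x)
    # Estimated clean power per bin: noisy periodogram minus the
    # noise floor, clipped at zero.
    S_hat = np.maximum(np.abs(X) ** 2 / len(x) - noise_var, 0.0)
    gain = S_hat / (S_hat + noise_var)
    return np.real(np.fft.ifft(gain * X))
```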
5/9 Why does it work? "Tweedie's formula". Closely related to Stein's lemma, it states that given noisy x = u + noise, the posterior mean E[u|x] can be computed without explicit knowledge of the prior p(u). It only needs the marginal density of the noisy data p(x). jstor.org/stable/23239562
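Written out for the Gaussian case (a standard statement; the notation here is mine, not from the thread):

```latex
% Tweedie's formula for x = u + n,  n ~ N(0, \sigma^2 I):
\hat{u}(x) \;=\; \mathbb{E}[u \mid x] \;=\; x + \sigma^2 \,\nabla_x \log p(x)
% where p(x) is the marginal density of the *noisy* data;
% the prior p(u) never appears explicitly.
```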
6/9 That is, given a noisy image you can (e.g. non-parametrically) estimate the marginal density of the noisy data (which is the convolution of the prior with the noise density). This estimate will then give you the denoiser -- all the while, no clean data is needed.
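A scalar toy illustrating the pipeline (my sketch, with hypothetical choices: a Gaussian KDE on the noisy samples as the non-parametric density estimate, with bandwidth set equal to the noise level):

```python
import numpy as np

def tweedie_denoise(x, sigma):
    """Tweedie-formula denoiser for scalar samples x = u + N(0, sigma^2).
    The score grad log p(x) of the noisy marginal is estimated
    non-parametrically with a Gaussian KDE built from the noisy
    samples themselves -- no clean data enters at any point."""
    h = sigma  # KDE bandwidth; matching the noise std is one simple choice
    d = x[:, None] - x[None, :]
    K = np.exp(-d ** 2 / (2 * h ** 2))  # kernel matrix
    # score(x_i) = sum_j K'(x_i - x_j) / sum_j K(x_i - x_j)
    score = (-d / h ** 2 * K).sum(axis=1) / K.sum(axis=1)
    return x + sigma ** 2 * score
```

On well-separated data (say u in {-1, +1}) this visibly shrinks each noisy sample toward its nearest mode, exactly as the posterior mean would.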
7/9 Interesting and weird corollary: larger images (more pixels) are easier to denoise. That's because there's more redundancy, and more "training" data available to estimate the empirical density. epubs.siam.org/doi/abs/10.113…
8/9 Big message here is that *a good denoiser learns the geometry of the manifold of images*. And to do this, it doesn't need clean images. This is part of the reason we proposed Regularization by Denoising (RED) epubs.siam.org/doi/abs/10.113…
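A minimal sketch of how a denoiser f plugs into RED as a regularizer (my illustration; the box-filter "denoiser" and step sizes are placeholders, and the clean gradient form relies on RED's conditions):

```python
import numpy as np

def red_gradient_step(x, y, H, denoiser, lam, step):
    """One gradient step of Regularization by Denoising (RED) for
        min_x 0.5 * ||H x - y||^2 + lam * 0.5 * x^T (x - f(x)),
    where f is any denoiser.  Under RED's conditions (local homogeneity,
    symmetric Jacobian) the regularizer's gradient is simply x - f(x)."""
    grad = H.T @ (H @ x - y) + lam * (x - denoiser(x))
    return x - step * grad
```

Usage: start from the noisy observation and iterate; any off-the-shelf denoiser (trained or classical) can serve as f.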
9/9 Also worth noting that Alain and Bengio made a very similar observation about denoising auto-encoders. jmlr.csail.mit.edu/papers/volume1…