📣New tutorial on how to use #Aydin — our easy-to-use and performant image #denoiser. We use one of our favorite test images: 'New York'. We go through different algorithms included in #Aydin and show how to use them, and how to set their parameters:
The question is: how well can we denoise this image in the absence of any prior knowledge, ground truth, or other training images? Below is a crop with and without noise. Notice that the original image has lots of detail: regular grids for windows, roof textures, etc...
2/n
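To make the setup concrete, here is a minimal sketch of the kind of corruption being benchmarked. The noise model (additive Gaussian) and the stand-in image are assumptions for illustration; they are not the tutorial's exact test conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 'New York' test image: any 2D float array in [0, 1].
clean = rng.random((128, 128))

# Additive Gaussian noise -- a common corruption model for such benchmarks
# (the tutorial's actual noise model is an assumption here).
sigma = 0.2
noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)
```

The denoisers below all get only `noisy` as input: no clean reference, no training set.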
We go through several algorithms, some only accessible through the new 'advanced mode'. If you like to tune parameters, be careful what you wish for... We have a LOT of parameters in advanced mode...
3/n
Below are crops of the noisy image denoised with: dct denoising, learned-dictionary denoising, 'spectral' denoising, and tv-denoising. All are OK, but not particularly impressive: lots of artefacts and a lack of (accurate) detail.
4/n
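For intuition on what 'dct denoising' does, here is a minimal sketch of the underlying idea: transform, zero out small coefficients (which are mostly noise), transform back. This is a toy version with a synthetic image and a hand-picked threshold, not Aydin's implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image, threshold):
    """Toy DCT denoising: hard-threshold small DCT coefficients.
    A sketch of the general idea, not Aydin's implementation."""
    coeffs = dctn(image, norm='ortho')
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm='ortho')

rng = np.random.default_rng(1)
# Smooth synthetic image: its energy concentrates in few DCT coefficients.
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = dct_denoise(noisy, threshold=0.5)
```

The threshold is exactly the kind of parameter that needs tuning, which is where the next tweet comes in.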
The denoisers used above are 'classics' that predate the 'deep learning' wave, but here they are augmented with Noise2Self automatic parameter tuning. CNN-based denoisers do not fare much better (see below): they do denoise, but they over-blur and fail on details.
5/n
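The Noise2Self idea behind that automatic tuning fits in a few lines: hide some pixels, denoise, and score the predictions on the hidden pixels against their *noisy* values -- no ground truth needed. A toy sketch, with Gaussian smoothing standing in for a real denoiser and all parameters illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def n2s_loss(denoise_fn, noisy, stride=4):
    """Noise2Self-style self-supervised loss (sketch): mask a grid of
    pixels, replace them by the mean of their 4 neighbours (np.roll
    wraps at the edges -- fine for a toy), denoise, and measure MSE on
    the masked pixels against their noisy values."""
    mask = np.zeros_like(noisy, dtype=bool)
    mask[::stride, ::stride] = True
    neigh = (np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0)
             + np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1)) / 4.0
    masked = noisy.copy()
    masked[mask] = neigh[mask]
    out = denoise_fn(masked)
    return float(np.mean((out[mask] - noisy[mask]) ** 2))

rng = np.random.default_rng(2)
clean = gaussian_filter(rng.random((64, 64)), 2.0)   # smooth synthetic image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# Pick the smoothing strength by minimising the self-supervised loss:
sigmas = [0.5, 1.0, 2.0, 4.0]
best = min(sigmas, key=lambda s: n2s_loss(lambda x: gaussian_filter(x, s), noisy))
```

The same trick generalises from one smoothing parameter to the many knobs of the classic denoisers above.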
Which is OK if your images are 'band-limited', i.e. lack detail because of a relatively extended point spread function, but less so if you actually have a lot of detail in your images that you want to retain.
Enter our Noise2Self-FGR-* family of denoisers!
6/n
Above you can see: ground truth, noisy, and denoised with n2s-fgr-cb. Pay attention to the details: for example, the white pyramidal roof has a faint periodic structure that is barely noticeable in the noisy image but is well recovered.
7/n
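When a ground truth is available, side-by-side comparisons like this one can also be quantified; PSNR is the standard metric (higher = closer to the reference). A minimal sketch on synthetic data, not the 'New York' image:

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference. Assumes both images share the same intensity range."""
    mse = np.mean((reference - image) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(3)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)
# Noise of std 0.1 on a [0, 1] image gives roughly 20 dB before denoising.
```

A good denoiser should raise the PSNR of its output well above that of the noisy input.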
Also, if you look carefully at some of the uniform regions in the image (below left), e.g. the horizon at the top, there are very few artefacts and no hallucinations -- that's hard to do! Compare with CNN-based denoising (n2s-unet, below right).
8/n
You can get the standalone, easy to install #Aydin Studio app (also called 'bundle') here:
We wondered: can we use #DeepLearning to map the landscape of protein sub-cellular localization?
2/n
The problem with #images, compared to #sequences, is that it is unclear how to compare them. For example, how do we estimate localization similarity from pairs of images of fluorescently labeled cells? With sequences we have algorithms and tools. But for images?
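To see why this is hard, consider the most naive baseline: Pearson correlation of flattened pixels. The same fluorescent 'spot' shifted by a few pixels scores near zero, so raw pixels make a poor localization-similarity measure (toy example, not our method):

```python
import numpy as np

def pixel_correlation(a, b):
    """Naive image similarity: Pearson correlation of flattened pixels.
    Brittle by construction -- a small translation of the same pattern
    can wreck the score."""
    a, b = a.ravel(), b.ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

img = np.zeros((32, 32))
img[8:12, 8:12] = 1.0                 # a small fluorescent 'spot'
shifted = np.roll(img, 6, axis=1)     # same pattern, shifted sideways
# identical images correlate perfectly; the shifted copy barely at all
```

Learned representations aim to score the shifted pair as highly similar, which raw pixels cannot do.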
So why should you care? Well, if you already care about adjusting the brightness and contrast of your images, you should care about gamma correction, especially because it partly happens without your knowledge and often without your control. (2/n)
First I would like to point out that I am not talking about gamma correction in the context of image analysis or quantification. I am talking about gamma correction as it pertains to how your images are reproduced on screen or on paper (3/n)
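For concreteness, here is what a display-gamma round trip looks like numerically, assuming a simple power-law gamma of 2.2 (a common approximation of the sRGB transfer curve, not its exact piecewise definition):

```python
import numpy as np

gamma = 2.2
linear = np.linspace(0.0, 1.0, 5)    # linear scene intensities
encoded = linear ** (1.0 / gamma)    # gamma-encoded values stored in the file
decoded = encoded ** gamma           # what the display emits: back to linear

# Mid-tones move the most: a linear 0.5 encodes to ~0.73, which is why
# an image viewed without the matching decoding looks washed out.
```

The round trip is lossless only if encoding and decoding agree -- which is exactly the step that can happen without your knowledge.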