Today we announced a new feature on Pixel 7/Pro and @GooglePhotos called "Unblur". It's the culmination of a year of intense work by our amazing teams. Here's a short thread about it
Last yr we brought two new editor functions to Google Photos: Denoise & Sharpen. These could improve the quality of most images that are mildly degraded. With Photo Unblur we raise the stakes in 2 ways:
First, we address blur & noise together w/ a single touch of a button.
2/n
Second, we're addressing much more challenging scenarios where degradations are not so mild. For any photo, new or old, captured on any camera, Photo Unblur identifies and removes significant motion blur, noise, compression artifacts, and mild out-of-focus blur.
3/n
Photo Unblur works to improve the quality of the 𝘄𝗵𝗼𝗹𝗲 photo. And if faces are present in the photo, we make additional, more specific, improvements to faces on top of the whole-image enhancement.
4/n
One of the fun things about Photo Unblur is that you can go back to your older pictures that may have been captured on legacy cameras or older mobile devices, or even scanned from film, and bring them back to life.
5/n
It's also fun to go way back in time (like the 70s and 80s!) and enhance some iconic images, like these photos of pioneering computer scientist Margaret Hamilton and basketball legend Bill Russell.
6/n
Recovery from blur & noise is a complex & long-standing problem in computational imaging. With Photo Unblur, we're bringing a practical, easy-to-use solution to a challenging technical problem, right to the palm of your hand
w/ @2ptmvd @navinsarmaphoto @sebarod & many others
n/n
Bonus: Once you have a picture enhanced with #PhotoUnblur, applying other edits on top can make the result even more dramatic. For instance, here I've also blurred the background and tweaked some color and contrast.
• • •
Images aren't arbitrary collections of pixels; they have complicated structure, even small ones. That's why it's hard to generate images well. Let me give you an idea:
High-contrast 3×3 grayscale patches of natural images, represented as points in ℝ⁹, lie approximately on a 2-D manifold: the Klein bottle!
1/3
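For the curious, here's a minimal sketch of those "ideal" Klein-bottle patches (my code, based on my recollection of the parametrization in Carlsson et al.: a one-variable quadratic evaluated along a direction θ):

```python
import numpy as np

def klein_patch(theta, phi):
    """Ideal Klein-bottle patch: q(t) = cos(phi)*t^2 + sin(phi)*t evaluated
    along direction theta on a 3x3 grid, then mean-centered and
    contrast-normalized, giving a unit vector in R^9."""
    g = np.array([-1.0, 0.0, 1.0])
    X, Y = np.meshgrid(g, g)
    t = np.cos(theta) * X + np.sin(theta) * Y
    p = np.cos(phi) * t**2 + np.sin(phi) * t
    p -= p.mean()
    return (p / np.linalg.norm(p)).ravel()

# The gluing (theta, phi) ~ (theta + pi, -phi) maps a patch to itself,
# which turns the (theta, phi) torus into a Klein bottle.
assert np.allclose(klein_patch(0.3, 1.0), klein_patch(0.3 + np.pi, -1.0))
```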
Images can be thought of as vectors in a high-dimensional space. It has long been hypothesized that images live on low-dim manifolds (hence manifold learning). It's a reasonable assumption: images of the world are not arbitrary. The low-dim structure arises from physical constraints & laws
2/3
But this doesn't mean the “low-dimensional” manifold has a simple or intuitive structure, even for tiny images. This classic paper by Gunnar Carlsson gives a lovely overview of the structure of data generally (and images in particular). Worth reading.
3/3
We often assume bigger generative models are better. But when practical image generation is limited by a compute budget, is this still true? The answer is no
By looking at latent diffusion models across different scales, our paper sheds light on the quality vs. model-size tradeoff
1/5
We trained a range of text-to-image LDMs & observed a notable trend: when constrained by a compute budget, smaller models frequently outperform their larger siblings in image quality. For example, the sampling result of a 223M model can be better than that of a model 4x larger
2/5
Smaller models may never reach the quality levels that large models can. Yet when operating under an inference budget, quality levels reachable by both models may be reached more efficiently w/ smaller ones. We study the tradeoff between model size, compute, quality, & downstream tasks
3/5
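To see the accounting, a toy sketch with made-up constants (not the paper's methodology): under a fixed sampling budget, step count trades off against model size.

```python
# Rough rule of thumb (hypothetical): the cost of one denoising step
# scales with parameter count, so a fixed inference budget buys a
# smaller model proportionally more sampling steps.
def affordable_steps(budget_flops, params, flops_per_param_per_step=6.0):
    return int(budget_flops / (params * flops_per_param_per_step))

budget = 5e14                        # hypothetical sampling budget, in FLOPs
for params in (223e6, 4 * 223e6):    # the 223M model vs one 4x larger
    print(f"{params / 1e6:.0f}M params -> {affordable_steps(budget, params)} steps")
```

Whether those extra steps buy enough quality is exactly the empirical question the paper studies.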
It’s been >20 years since I published my first work on multi-frame super-res (SR) w/ Nhat Nguyen and the late great Gene Golub. Here’s my personal story of SR as I’ve experienced it from theory, to practical algorithms, to deployment in product. In a way it’s been my life’s work
Tsai and Huang (1984) were the first to publish the concept of multi-frame super-resolution. The key idea: a high-resolution image is related to its shifted, low-resolution versions in the frequency domain through the shift and aliasing properties of the Fourier transform
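For concreteness, a 1-D sketch of that relation in my notation (an illustration of the idea, not a quote from the paper). Let x(u) be the bandlimited high-res signal with Fourier transform X(f), and let the k-th frame be the shifted, undersampled copy y_k[n] = x(nT + δ_k):

```latex
% Shift property + aliasing from sampling at period T:
Y_k(f) \;=\; \frac{1}{T} \sum_{m=0}^{L-1}
       e^{\,j 2\pi \left(f - m/T\right) \delta_k}\,
       X\!\left(f - \frac{m}{T}\right)
% With X bandlimited so that only L spectral replicas overlap, each
% frequency f yields one linear equation in L unknown samples of X;
% K >= L frames with distinct shifts delta_k give a solvable system,
% i.e., resolution beyond the per-frame Nyquist limit.
```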
This setup assumed no noise, global translation, and a trivial point-sampling process: the sensor blurring effect was ignored. But even with this simple model, the difficulty is clear: we have two entangled unknowns, the motion vectors and the high-res image. A bit more realistic model is:
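(The tweet is truncated here; what follows is a hedged guess at the model it introduces, the standard observation model from the multi-frame SR literature, in my notation.)

```latex
\mathbf{y}_k \;=\; \mathbf{D}\,\mathbf{H}\,\mathbf{F}_k\,\mathbf{x} + \mathbf{e}_k,
\qquad k = 1,\dots,K
% x   : unknown high-res image         F_k : motion (warp) for frame k
% H   : camera blur (optics + sensor)  D   : downsampling to the sensor grid
% y_k : k-th observed low-res frame    e_k : noise
```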
Motion blur is often misunderstood, because people think of it in terms of a single imperfect image captured at some instant in time.
But motion blur is in fact an inherently temporal phenomenon. It is a temporal convolution of pixels (at the same location) across time.
1/4
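As a formula (my notation): with the shutter open for duration τ, each pixel p records a temporal convolution of the instantaneous sharp signal s(p, t) with a box kernel:

```latex
b(p, t) \;=\; \int h(u)\, s(p,\, t - u)\, du,
\qquad h(u) \;=\; \frac{1}{\tau}\, \mathbf{1}_{[0,\tau]}(u)
% The faster an object moves, the more s(p, .) changes within the
% shutter window, and the stronger the resulting spatial blur.
```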
Integration across time (e.g., an open shutter) gives motion blur w/ strength depending on the speed of objects
A mix of object speed, shutter speed, and frame rate can cause aliasing in time (spokes moving backwards) & blur in space (wheel surface), all in the same image
2/4
In a video with a shutter speed too slow to avoid motion blur, but a frame rate high enough to avoid temporal aliasing, you can in fact remove motion blur just by deconvolution *in time*, with a single 1D "time spread function" (the temporal analog of a point spread function). No segmentation, no motion estimation needed
3/4
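Here's a minimal sketch of that idea (my code, not the exact pipeline): assume each blurred frame is the average of `exposure_frames` consecutive sharp frames (an open-shutter box kernel in time), with circular boundary handling along the time axis:

```python
import numpy as np

def deblur_in_time(blurred, exposure_frames, eps=1e-3):
    """Remove motion blur by 1D deconvolution along the time axis.

    blurred: video of shape (T, H, W), where each frame is assumed to be
    the average of `exposure_frames` consecutive sharp frames.
    """
    T = blurred.shape[0]
    # The 1D "time spread function": an open shutter spanning exposure_frames.
    h = np.zeros(T)
    h[:exposure_frames] = 1.0 / exposure_frames

    B = np.fft.fft(blurred, axis=0)      # per-pixel FFT along time
    H = np.fft.fft(h)[:, None, None]     # kernel spectrum, broadcast to (T,1,1)
    # Wiener-style regularized inverse to avoid dividing by near-zero bins.
    S = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(S, axis=0))
```

Note the kernel is purely temporal and shared by every pixel: no segmentation, no motion estimation, exactly as claimed above.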
This is not a scene from Inception. The "sorcery" is just a real photo taken with a very long focal-length lens. When the focal length is long, the field of view becomes very small and the resulting image appears flattened.
1/4
Here's another example:
The Empire State Building and the Statue of Liberty are about 4.5 miles apart, and the building is about 5x taller.
2/4
Here's a nice visualization of how focal length relates to the (angular) field of view.
3/4
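The underlying relation is simple: for a rectilinear lens, the angular field of view is 2·arctan(sensor width / 2f). A quick sketch:

```python
import math

def fov_degrees(sensor_width_mm, focal_length_mm):
    """Horizontal angular field of view of a rectilinear lens."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# On a full-frame sensor (36 mm wide): a 24 mm lens sees ~74 degrees,
# while a 400 mm telephoto sees only ~5 degrees; hence the flattened look.
print(fov_degrees(36, 24), fov_degrees(36, 400))
```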
What is resolution in an image? It is not the number of pixels. Here's the classical Rayleigh criterion taught in basic physics:
1/5
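For reference (the attached figure doesn't survive in this unroll), the standard statement for a circular aperture of diameter D at wavelength λ:

```latex
\theta_{\min} \;\approx\; 1.22\, \frac{\lambda}{D}
% Two point sources are conventionally "resolved" when the peak of one
% Airy pattern falls no closer than the first null of the other.
% Equivalently, in the image plane of a lens with f-number N = f/D,
% the Airy disk radius is r ~ 1.22 * lambda * N.
```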
This concept is important in imaging because it guides how densely we should pack pixels together to avoid or allow aliasing. (Yes, sometimes aliasing is useful!)
2/5
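A back-of-envelope version of that guidance (my numbers, assuming a diffraction-limited lens, whose incoherent optical cutoff is 1/(λN) cycles per unit length):

```python
def nyquist_pitch_um(wavelength_um=0.55, f_number=1.8):
    """Largest pixel pitch that still Nyquist-samples a diffraction-limited
    lens: the optical cutoff is 1/(wavelength * N) cycles/um, so Nyquist
    requires pitch <= 1/(2 * cutoff) = wavelength * N / 2."""
    return wavelength_um * f_number / 2

# Green light through an f/1.8 lens: pitch <= ~0.5 um captures everything
# the optics can deliver; a coarser pitch deliberately admits aliasing.
print(nyquist_pitch_um())
```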
But Rayleigh's criterion is just a rule of thumb, not a physical law. It says we can't eyeball two sources if they're too close. But this doesn't mean we can't *detect* 1 vs 2 or more sources, even in the presence of noise. With proper statistical tests, we absolutely can.
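To make that concrete, a toy simulation (my code: a Gaussian PSF standing in for an Airy pattern, white Gaussian noise, and the optimal matched-filter test between the two known hypotheses):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 121)

def psf(center):
    # Toy Gaussian PSF of width 1, standing in for an Airy pattern.
    return np.exp(-0.5 * (x - center) ** 2)

sep = 0.4                                    # well below the resolution limit
one = psf(0.0)                               # H1: a single source
two = 0.5 * (psf(-sep / 2) + psf(sep / 2))   # H2: two sources, equal total flux

# Optimal test for two known signals in white Gaussian noise: decide H2
# when t.y exceeds the midpoint threshold, with t = s2 - s1.
t = two - one
thresh = t @ (one + two) / 2

sigma = 0.02
hits = sum(t @ (two + sigma * rng.standard_normal(x.size)) > thresh
           for _ in range(1000))
print(f"two sources correctly detected in {hits / 10:.1f}% of trials")
```

Even at a separation far below the two-peak "eyeball" limit, the test gets it right almost every time at this noise level.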