Today we announced a new feature on Pixel 7/Pro and @GooglePhotos called "Unblur". It's the culmination of a year of intense work by our amazing teams. Here's a short thread about it
Last yr we brought two new editor functions to Google Photos: Denoise & Sharpen. These could improve the quality of most images that are mildly degraded. With Photo Unblur we raise the stakes in 2 ways:
First, we address blur & noise together w/ a single touch of a button.
2/n
Second, we're addressing much more challenging scenarios where degradations are not so mild. For any photo, new or old, captured on any camera, Photo Unblur identifies and removes significant motion blur, noise, compression artifacts, and mild out-of-focus blur.
3/n
Photo Unblur works to improve the quality of the 𝘄𝗵𝗼𝗹𝗲 photo. And if faces are present in the photo, we make additional, more specific, improvements to faces on top of the whole-image enhancement.
4/n
One of the fun things about Photo Unblur is that you can go back to your older pictures that may have been captured on legacy cameras or older mobile devices, or even scanned from film, and bring them back to life.
5/n
It's also fun to go way back in time (like the 70s and 80s!) and enhance some iconic images like these photos of pioneering computer scientist Margaret Hamilton, and basketball legend Bill Russell.
6/n
Recovery from blur & noise is a complex & long-standing problem in computational imaging. With Photo Unblur, we're bringing a practical, easy-to-use solution to a challenging technical problem, right to the palm of your hand
w/ @2ptmvd @navinsarmaphoto @sebarod & many others
n/n
Bonus: Once you have a picture enhanced with #PhotoUnblur, applying other effects on top can have an even more dramatic effect. For instance, here I've also blurred the background and tweaked some color and contrast.
Integral geometry is a beautiful topic bridging geometry, probability & statistics
Say you have a curve with any shape, possibly even self-intersecting. How can you measure its length?
This has many applications - the curve could be a strand of DNA or a twisted length of wire
1/n
A curve is a collection of tiny segments. Measure each segment & sum. You can go further: make the segments so small they are essentially points, count the red points
A practical way to do this: drop many lines, or a dense grid, intersecting the shape & count intersections
2/n
Curve's length is the sum of intersections n(θ,p) of all lines (in polar coords: angle θ, offset p) with the curve (counting multiplicities). This is the beautiful Crofton formula:
Length = 1/2 ∫∫ n(θ,p) dθ dp
The 1/2 is there because oriented lines are a double cover of un-oriented lines
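Not part of the thread, but here's a minimal numerical sketch of the Crofton idea (assuming NumPy; the polyline curve, number of lines, and sampling radius are all illustrative choices): drop random lines (θ, p), count crossings with the curve, and rescale the average count by the measure of the sampling box and the 1/2 factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def crofton_length(pts, n_lines=10_000, R=None):
    """Monte Carlo estimate of the length of a polyline pts (k x 2)
    via the Crofton formula: Length = 1/2 * integral of n(theta, p) dtheta dp."""
    if R is None:
        # sampling radius: every line with |p| > R misses the curve
        R = 1.5 * np.abs(pts).max() + 1.0
    theta = rng.uniform(0.0, np.pi, n_lines)     # line angle
    p = rng.uniform(-R, R, n_lines)              # signed offset of the line from the origin
    normals = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # signed distance of every polyline vertex to every sampled line
    g = normals @ pts.T - p[:, None]             # (n_lines, k)
    # a segment is crossed when its two endpoints lie on opposite sides of the line
    crossings = (np.sign(g[:, :-1]) != np.sign(g[:, 1:])).sum(axis=1)
    # (measure of the sampling box) * E[n], then the 1/2 factor
    return 0.5 * (np.pi * 2 * R) * crossings.mean()

# sanity check on a unit circle (true length 2*pi ≈ 6.283)
t = np.linspace(0, 2 * np.pi, 512)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(crofton_length(circle))   # ≈ 6.28, within a percent or two (Monte Carlo error)
```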
Smoothing splines fit a function to data as the sol'n of a regularized least-squares optimization problem.
But it’s also possible to do it in one shot with an unusually shaped kernel (see figure)
Is it possible to solve other optimization problems this way? Surprisingly yes
1/n
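As a concrete (and entirely illustrative) sketch of the one-shot idea, assuming NumPy and a discrete second-difference penalty standing in for the spline roughness penalty: the penalized least-squares fit is a single linear operator applied to the data, and the rows of that operator are the oddly shaped kernel.

```python
import numpy as np
import numpy.linalg as la

# Discretized smoothing-spline-style problem:
#   minimize ||y - f||^2 + lam * ||D f||^2,   D = second-difference operator
# Closed form: f = (I + lam * D^T D)^{-1} y, i.e. one linear "kernel" applied to y.

n, lam = 200, 50.0
D = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second-difference matrix
W = la.inv(np.eye(n) + lam * D.T @ D)     # the one-shot smoothing operator

x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(1).normal(size=n)

f_hat = W @ y          # the whole fit is a single matrix-vector product
kernel = W[n // 2]     # a middle row of W: a bump with small negative side lobes
```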
This is just one instance of how one can “kernelize” an optimization problem. That is, approximate the solution of an optimization problem in just one-step by constructing and applying a kernel once to the input
Given some conditions, you can do it much more generally
2/n
If you specialize the regularization to be of the form
φ(x) = ρ( ||Ax|| ), where A has entries Aᵢⱼ = R(|i-j|) (i.e., A is stationary & isotropic), this gives tidy conversions between φ(x) and the kernel K(x).
Mean-shift iteratively moves points towards regions of higher density. It does so by placing a kernel at each data point, calculating the mean of the data points within that window, and shifting points towards this mean until convergence. Look familiar?
1/n (Animation @gabrielpeyre)
The first term on the right-hand side of the ODE has the form of a pseudo-linear denoiser f(x) = W(x) x: a weighted average of the points, where the weights depend on the data. The overall mean-shift process is a lot like a residual flow:
d/dt x(t) = f(x(t)) - x(t)
2/n
The residual on the RHS is an approximation of the “score”, i.e. the gradient of the log of the empirical density of x, making this a gradient flow
d/dt x(t) ≈ ∇ log p̂(x(t))
So mean-shift (a) estimates the empirical density & (b) flows points to nearby peaks, similar to flow-matching & InDI
3/n
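A minimal sketch of both points, assuming NumPy and a Gaussian kernel (the data and bandwidth h are arbitrary): the mean-shift step is a data-dependent weighted average W(x) x, and its residual equals h² times the gradient of the log of the Gaussian KDE (the empirical score).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, (100, 1)),
                    rng.normal(+2, 0.5, (100, 1))])   # two 1-D clusters
h = 0.5                                               # kernel bandwidth

def mean_shift_step(x, X, h):
    """One mean-shift update: x <- W(x) X, a data-dependent weighted average."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h ** 2))
    w /= w.sum()                                      # weights sum to 1
    return w @ X

def kde_score(x, X, h):
    """Gradient of log p_hat(x) for the Gaussian kernel density estimate."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h ** 2))
    return (w[:, None] * (X - x)).sum(axis=0) / (h ** 2 * w.sum())

x = np.array([0.7])
print(mean_shift_step(x, X, h) - x)    # residual f(x) - x
print(h ** 2 * kde_score(x, X, h))     # same thing: h^2 * grad log p_hat(x)

for _ in range(100):                   # iterating the residual flow ...
    x = mean_shift_step(x, X, h)
print(x)                               # ... drives x to the nearby density peak near +2
```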
Random matrices are very important in modern statistics and machine learning, not to mention physics
A model about which much less is known is matrices sampled uniformly from the set of doubly stochastic matrices: Uniformly Distributed Stochastic Matrices
A thread -
1/n
First, what are doubly stochastic matrices?
Non-negative matrices whose row & column sums=1.
The set of doubly stochastic matrices is also known as the Birkhoff polytope: an (n−1)² dimensional convex polytope in ℝⁿˣⁿ with extreme points being permutation matrices.
2/n
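A quick illustration of the definition (my own sketch, assuming NumPy; the size n and number of permutations k are arbitrary): mix a few permutation matrices, the extreme points, with random convex weights and check the row & column sums.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 20

# permutation matrices: the (sparse) extreme points of the Birkhoff polytope
perms = [np.eye(n)[rng.permutation(n)] for _ in range(k)]

# a random convex combination of extreme points lies inside the polytope
alpha = rng.dirichlet(np.ones(k))
M = sum(a * P for a, P in zip(alpha, perms))

print(M.min() >= 0)                    # non-negative entries
print(np.allclose(M.sum(axis=0), 1))   # column sums = 1
print(np.allclose(M.sum(axis=1), 1))   # row sums = 1
print((perms[0] != 0).mean(), (M != 0).mean())   # a permutation is sparse; the mixture is dense
```

(Note this gives points inside the polytope, but not a uniform sample from it - uniform sampling is the harder question the thread is about.)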
The extreme points of the Birkhoff polytope (permutations) are sparse matrices, but a typical matrix sampled from inside the polytope is, by contrast, very dense
Since rows and columns are exchangeable, the entries of a sampled matrix have the same marginal distribution.
Least squares can teach a lot about some complex ideas in modern machine learning, including overfitting & double descent.
Let's assume A is n-by-p. So we have n data points and p parameters
1/10
If n ≥ p (“under-fitting” or “over-determined” case), the solution is
x̃ = (AᵀA)⁻¹ Aᵀ y
But if n < p (“over-fitting” or “under-determined” case), there are infinitely many solutions that give *zero* training error. We pick the minimum-norm (min ‖x‖²) solution:
x̃ = Aᵀ(AAᵀ)⁻¹ y
2/10
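A quick numerical check of the two closed forms (not from the thread; assuming NumPy and random Gaussian data):

```python
import numpy as np
import numpy.linalg as la

rng = np.random.default_rng(0)

def ls_solution(A, y):
    """Closed-form least-squares solution in both regimes."""
    n, p = A.shape
    if n >= p:                               # over-determined: unique LS solution
        return la.solve(A.T @ A, A.T @ y)
    return A.T @ la.solve(A @ A.T, y)        # under-determined: minimum-norm interpolant

# under-determined case (n < p): training error is exactly zero
A, y = rng.normal(size=(10, 50)), rng.normal(size=10)
x = ls_solution(A, y)
print(np.allclose(A @ x, y))   # True: zero training error
print(la.norm(x))              # smallest norm among all interpolating solutions
```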
In either case, the solution can be compactly written in terms of the SVD of A:
A = USVᵀ
where U & V are orthogonal matrices of size n×n & p×p, and S is n×p with k nonzero diagonal elements (the singular values sᵢ, i = 1,…,k)
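In both regimes the solution is the pseudoinverse applied to y, x̃ = V S⁺ Uᵀ y, where S⁺ inverts only the k nonzero singular values. A quick check (assuming NumPy and a full-rank random A) that this SVD form matches the regime-specific formula:

```python
import numpy as np
import numpy.linalg as la

rng = np.random.default_rng(1)
A, y = rng.normal(size=(10, 50)), rng.normal(size=10)   # n < p here; swap shapes for n >= p

U, s, Vt = la.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ y) / s)            # x = V S^+ U^T y

x_minnorm = A.T @ la.solve(A @ A.T, y)    # the n < p formula from above
print(np.allclose(x_svd, x_minnorm))      # True
```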