Have you ever taken a photo & wished you'd zoomed in more or framed it better? When this happens, we just crop.
Now there's a better way: Zoom Enhance - a new feature my team just shipped on Pixel. Available in Google Photos under Tools, it enhances both zoomed & un-zoomed images.
1/n
Zoom Enhance is our first image-to-image diffusion model designed & optimized to run fully on-device. It allows you to crop or frame the shot you wanted, and enhance it - after capture. The input can be from any device, Pixel or not, old or new. Below are some examples & use cases.
2/n
Let's say you've zoomed to the max on your Pixel 8/9 Pro and got your shot, but you wish you could get a little closer. Now you can zoom in more, and enhance.
3/n
A bridge too far to see the details? A simple crop may not give the quality you want. Zoom Enhance can come in handy.
4/n
If you've been to the Louvre you know how hard it is to get close to the most famous painting of all time.
Next time you could shoot with the best optical quality you have (5x in this case), then zoom in after the fact.
5/n
Maybe you're too far away to read a sign and can use a little help from Zoom Enhance
6/n
Like most people, I have lots of nice shots that would have been even nicer if I'd framed them better. Rather than just cropping, you can now frame the shot you wanted, after the fact, and without losing out on quality.
7/n
Is the subject small and the field of view large? Zoom Enhance can help to isolate and enhance the region of interest.
8/n
Sometimes there's a better shot (or several) hiding within the just-average shot you took. Compose your best shot and enhance.
9/n
There are a lot of gems hidden in older, lower-quality photos that you can now isolate and enhance. Like this one from some 20 years ago.
10/n
Pictures you get on social media or on the web (or even your own older photos) may not always be high quality/resolution. If they're small enough (~1MP), you can enhance them with or without cropping.
11/12
So Zoom Enhance gives you the freedom to capture the details within your photos, allowing you to highlight specific elements and focus on what matters to you.
It's a first step toward powerful editing tools for consumer images, harnessing on-device diffusion models.
12/12
Bonus use case worth mentioning:
Using your favorite text-to-image generator, you typically get a result at ~1 MP resolution (left image is 1280 × 720). If you want higher resolution, you can upscale it directly on-device (right, 2048 × 1152) with Zoom Enhance.
13/12
Years ago, when my wife and I were planning to buy a home, my dad stunned me with a quick mental calculation of loan payments.
I asked him how - he said he'd learned the strange formula for compound interest from his father, who was a merchant in 19th-century Iran.
1/4
The origins of the formula my dad knew are a mystery, but I know it has been used in the bazaars of Iran (and elsewhere) for as long as anyone can remember.
It has an advantage: it's very easy to compute on an abacus. The exact compounding formula is much more complicated.
2/4
I figured out how the two formulae relate: the historical formula is the Taylor series of the exact formula around r=0.
But the crazy thing is that the old Persian formula goes back hundreds (maybe thousands) of years before Taylor, having been passed down for generations.
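A quick numerical sketch of the relationship (my own example, not from the thread, assuming the exact formula is the standard annuity payment M = P·r/(1 − (1+r)⁻ⁿ); its first-order Taylor expansion around r = 0 works out to (P/n)·(1 + (n+1)·r/2), the kind of rule that is easy to run on an abacus):

# Exact annuity payment for principal P, per-period rate r, n payments,
# and its first-order Taylor expansion around r = 0.
P, r, n = 1000.0, 0.01, 12

exact = P * r / (1 - (1 + r) ** (-n))        # standard amortized-loan payment
taylor = (P / n) * (1 + (n + 1) * r / 2)     # first-order expansion around r = 0

print(round(exact, 2), round(taylor, 2))     # ~88.85 vs ~88.75 per period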
How are Kernel Smoothing in statistics, Data-Adaptive Filters in image processing, and Attention in Machine Learning related?
My goal is not to argue who should get credit for what, but to show a progression of closely related ideas over time and across neighboring fields.
1/n
In the beginning there was Kernel Regression - a powerful and flexible way to fit an implicit function point-wise to samples. The classic KR is based on interpolation kernels that are a function of the positions (x) of the samples, and not of their values (y).
2/n
Instead of a fixed smoothing parameter h, we can adjust it dynamically based on the local density of samples near the point of interest. This accounts for variations in the spatial distribution of the samples, but still doesn't take their values into account.
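A minimal NumPy sketch of both ideas (the Gaussian kernel, the toy data, and the choices h = 0.3 and k = 10 are mine, purely illustrative): classic Nadaraya-Watson kernel regression with a fixed bandwidth h, and an adaptive variant where h at each query point is the distance to its k-th nearest sample.

import numpy as np

def nw_regression(xq, x, y, h):
    # Classic Nadaraya-Watson: weights depend only on the sample positions x, not the values y
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)          # normalize weights at each query point
    return w @ y                               # pointwise weighted average of the y's

def adaptive_nw_regression(xq, x, y, k=10):
    # Data-adaptive bandwidth: distance to the k-th nearest sample (smaller h where samples are dense)
    d = np.abs(xq[:, None] - x[None, :])
    h = np.sort(d, axis=1)[:, k][:, None]
    w = np.exp(-0.5 * (d / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ y

# Illustrative data: noisy, unevenly spaced samples of a smooth function
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = np.sin(x) + 0.2 * rng.standard_normal(x.size)
xq = np.linspace(0, 2 * np.pi, 200)

y_fixed = nw_regression(xq, x, y, h=0.3)
y_adapt = adaptive_nw_regression(xq, x, y, k=10)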
“On a log-log plot, my grandmother fits on a straight line.”
-Physicist Fritz Houtermans
There's a lot of truth to this. Log-log plots are often abused and can be very misleading.
1/5
A plot of empirical data can reveal hidden phenomena or scaling. An important and common model is to look for power laws like
p(x) ≃ L(x) xᵃ
where L(x) is slowly varying, so that xᵃ is dominant
Power laws appear all over physics, biology, math, economics, etc. However...
2/5
...just because your data looks like a line on a log-log plot, you can't conclude that you're looking at a power law
In fact, roughly straight behavior on a log-log scale is (roughly) a necessary condition for power-law behavior, but it is not a sufficient one. Take this example:
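(The thread's example is in a figure; here is a numerical stand-in, a sketch using an arbitrary log-normal.) A log-normal is not a power law - its log-density is quadratic, not linear, in log x - yet over a few decades it can look deceptively straight on a log-log plot:

import numpy as np

# Log-normal density: log p(x) is quadratic in log x, not linear,
# but over a limited range it can look deceptively straight on a log-log plot.
x = np.logspace(0, 3, 50)
mu, sigma = 0.0, 3.0
p = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

logx, logp = np.log10(x), np.log10(p)
slope = np.polyfit(logx, logp, 1)[0]       # a straight-line fit looks plausible...
curvature = np.polyfit(logx, logp, 2)[0]   # ...but the quadratic term is nonzero
print(slope, curvature)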
Integral geometry is a beautiful topic bridging geometry, probability & statistics
Say you have a curve with any shape, possibly even self-intersecting. How can you measure its length?
This has many applications - the curve could be a strand of DNA or a twisted length of wire
1/n
A curve is a collection of tiny segments. Measure each segment & sum. You can go further: make the segments so small they are essentially points, and count the (red) points
A practical way to do this: drop many lines, or a dense grid, intersecting the shape & count intersections
2/n
The curve's length is the integral, over all lines (in polar coords: angle ψ, offset p), of the number of intersections n(ψ,p) with the curve (counting multiplicities). This is the beautiful Crofton formula:
Length = 1/2 ∫∫ n(ψ,p) dψ dp
The 1/2 is there because oriented lines are a double cover of un-oriented lines
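A Monte Carlo sketch of the Crofton formula (my own toy example, not from the thread): take a unit circle (true length 2π), represent it as a dense polyline, drop random lines with angle ψ ∈ [0, π) and offset p ∈ [−R, R] for a disk of radius R containing the curve, and average the intersection counts.

import numpy as np

rng = np.random.default_rng(1)

# Curve: a unit circle, true length 2*pi, as a dense polyline
t = np.linspace(0, 2 * np.pi, 401)
curve = np.stack([np.cos(t), np.sin(t)], axis=1)
a, b = curve[:-1], curve[1:]                      # polyline segments

R = 2.0                                           # radius of a disk containing the curve
N = 10_000                                        # number of random lines
psi = rng.uniform(0, np.pi, N)                    # line angle
p = rng.uniform(-R, R, N)                         # signed offset from origin
u = np.stack([np.cos(psi), np.sin(psi)], axis=1)  # unit normal of each line

# Signed distances of segment endpoints to each line; a sign change means a crossing
sa = a @ u.T - p
sb = b @ u.T - p
n = np.count_nonzero(sa * sb < 0, axis=0)         # intersections per line

# Crofton: Length = 1/2 * integral of n(psi, p) over dpsi dp
box = np.pi * 2 * R                               # measure of the (psi, p) sampling box
print(0.5 * box * n.mean(), 2 * np.pi)            # estimate vs. true length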
Smoothing splines fit a function to data as the solution of a regularized least-squares optimization problem.
But it’s also possible to do it in one shot with an unusually shaped kernel (see figure)
Is it possible to solve other optimization problems this way? Surprisingly yes
1/n
This is just one instance of how one can “kernelize” an optimization problem. That is, approximate the solution of an optimization problem in just one step by constructing and applying a kernel once to the input
Under some conditions, you can do this much more generally
2/n
If you specialize the regularization to be of the form
φ(x) = ρ( ||Ax|| ), where A is stationary & isotropic (its entries depend only on the offset, A_ij = R(|i-j|)), this gives tidy conversions between φ(x) and the kernel K(x).
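For the quadratic case ρ(·) = ‖·‖² (the classic smoothing-spline/Tikhonov setting) the one-shot kernel can be written explicitly: the minimizer of ‖y − x‖² + λ‖Ax‖² is x̂ = (I + λAᵀA)⁻¹y, so the rows of K = (I + λAᵀA)⁻¹ are the "unusually shaped" kernels, applied to the data once. A small sketch (the second-difference A and λ below are my choices):

import numpy as np

n, lam = 200, 50.0

# A: discrete second-difference operator (stationary: entries depend only on i - j)
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = [1.0, -2.0, 1.0]

# One-shot "kernelized" solution of  min_x ||y - x||^2 + lam * ||A x||^2
K = np.linalg.inv(np.eye(n) + lam * A.T @ A)    # each row of K is a smoothing kernel

rng = np.random.default_rng(0)
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)
x_hat = K @ y                                   # apply the kernel once: no iterations

center_kernel = K[n // 2]                       # the kernel at an interior point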
Mean-shift iteratively moves points towards regions of higher density. It does so by placing a kernel at each data point, calculating the weighted mean of the data points within that window, and shifting each point towards this mean, repeating until convergence. Look familiar?
1/n (Animation @gabrielpeyre)
The first term on the right-hand side of the ODE has the form of a pseudo-linear denoiser f(x) = W(x) x: a weighted average of the points, where the weights themselves depend on the data. The overall mean-shift process is a lot like a residual flow:
d/dt x(t) = f(x(t)) - x(t)
2/n
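A minimal NumPy sketch of mean-shift written in exactly this form (Gaussian kernel; the bandwidth h and the toy data are mine, purely illustrative): f(x) = W(x)·x, and each iteration is the residual update x ← x + (f(x) − x).

import numpy as np

def mean_shift_step(points, data, h=0.5):
    # f(x) = W(x) x : weighted average of the data points, with weights
    # that depend on the current positions of the points themselves
    d2 = ((points[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * h ** 2))
    W /= W.sum(axis=1, keepdims=True)
    return W @ data

# Toy data: two Gaussian blobs; every point flows toward a nearby mode
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
x = data.copy()
for _ in range(50):
    x = x + (mean_shift_step(x, data, h=0.5) - x)   # residual update: d/dt x = f(x) - x

# After convergence, the points cluster near the two density peaks (around (-2,-2) and (2,2))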
The residual on the RHS is an approximation of the “score” - the gradient of the log of the empirical density of x - making it a gradient flow
d/dt x(t) ≈ ∇ log p̂(x(t))
So mean-shift (a) estimates the empirical density & (b) flows points to nearby peaks, similarly to flow-matching & InDI
3/n
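To see why, for a Gaussian kernel of bandwidth h (a sketch in my notation, not from the thread): with p̂(x) ∝ Σᵢ exp(−‖x − xᵢ‖²/2h²) and weights wᵢ(x) = exp(−‖x − xᵢ‖²/2h²) / Σⱼ exp(−‖x − xⱼ‖²/2h²),
∇ log p̂(x) = (1/h²) Σᵢ wᵢ(x)(xᵢ − x)
so f(x) − x = Σᵢ wᵢ(x) xᵢ − x = h² ∇ log p̂(x)
i.e. the mean-shift residual is exactly the score of the kernel density estimate, up to the scale h².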