Thread by Dan Kaminsky
Huawei’s P30 Pro probably has the best phone camera on the global market right now. They’re doing magnificent work with Sony sensors. Much respect.

There is real concern about Huawei’s moon shots. Basically, their AI has learned what the moon (which is mostly unchanging) looks like.
So when you’re in moon mode and you try to take a photo of the moon, it (supposedly; let’s assume this is true for the sake of this thread) combines the incoming signal with the learned constraints of the moon to generate an accurate image without the usual physical demands (tripods, etc.).
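To make that concrete, here’s a minimal numpy sketch of the general idea, assuming the simplest possible model: per-pixel Gaussian fusion of noisy sensor data with a learned template, each weighted by how much you trust it. Everything here (the function, the variances, the template) is hypothetical; this is not Huawei’s actual pipeline.

```python
import numpy as np

def fuse_with_prior(noisy, prior_mean, noise_var, prior_var):
    # Per-pixel MAP estimate under Gaussian assumptions: weight the
    # sensor data and the learned prior by their inverse variances.
    w_obs, w_prior = 1.0 / noise_var, 1.0 / prior_var
    return (w_obs * noisy + w_prior * prior_mean) / (w_obs + w_prior)

# Toy usage: a shaky handheld shot is noisy (high variance), so the
# learned "moon template" dominates; on a tripod, the sensor would win.
rng = np.random.default_rng(0)
template = rng.random((64, 64))                    # stand-in learned moon
shot = template + 0.5 * rng.standard_normal((64, 64))  # shaky capture
fused = fuse_with_prior(shot, template, noise_var=0.25, prior_var=0.01)
```

With those numbers, the output is mostly template; crank prior_var up and the sensor data wins again. That dial is the whole controversy.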
Is this “cheating”?

It’s extraordinarily easy to say yes, of course it is: those aren’t the values that came off the Sony sensor; that’s just fitting a desired outcome over a noisy signal.

What’s interesting is how deep that rabbit hole goes.
All photography involves filtering. I mean, color involves selecting some photons and rejecting others. (Yeah. I know.) All digital photography involves correcting for a miasma of factors. RAW isn’t just big because it’s uncompressed. There’s more data there, undisneyfied.
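For a feel of how much correction happens before you ever see a pixel, here’s a toy sliver of a development pipeline in numpy. The gains and gamma are invented for illustration; real pipelines add demosaicing, lens correction, denoising, and tone mapping on top of this.

```python
import numpy as np

def develop(raw_rgb, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    # Per-channel white-balance gains, then a gamma curve for display.
    # Both sets of numbers here are made up; real values come from
    # calibration. Every step is a filter: it keeps some of the
    # signal and throws the rest away.
    img = np.clip(raw_rgb * np.array(wb_gains), 0.0, 1.0)
    return img ** (1.0 / gamma)   # linear sensor light -> display space

raw = np.random.default_rng(1).random((4, 4, 3))  # stand-in linear RAW
print(develop(raw))
```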
There has long been a field called computational photography, which involves reprocessing one or many (possibly RAW) datasets to develop better photos. I play in this space every so often; Fiji.sc just gets better and more fun every year.
The field is having a moment because there is much more computational power available at the time of initial capture. Forget reprocessing; now we just have initial processing. We don’t merge 30 photos after the fact; we just take 30 and there’s your photo. Night vision.
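Here’s a hedged sketch of that take-30-and-merge step: align each frame to the first via phase correlation, then average. The integer-only alignment and the helper names are my simplifications; real burst pipelines align per tile, at subpixel precision, and merge more cleverly than a mean.

```python
import numpy as np

def align_shift(ref, frame):
    # Phase correlation: the peak of the normalized cross-power
    # spectrum gives the integer translation between two frames.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

def merge_burst(frames):
    # Align every frame to the first, then average: N frames cut
    # sensor noise by roughly sqrt(N). "Night mode" in miniature.
    ref = frames[0]
    aligned = [np.roll(f, align_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)

# Toy usage: ten shifted, noisy copies of one scene.
rng = np.random.default_rng(2)
scene = rng.random((64, 64))
burst = [np.roll(scene, (i % 3, i % 2), axis=(0, 1))
         + 0.2 * rng.standard_normal((64, 64)) for i in range(10)]
print(merge_burst(burst).shape)
```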
A lot of actually-working AI comes down to: “Here’s a bunch of noisy signal. Here’s ground truth. Find a system that turns this noise into that truth.” Thirty photos from a shaky hand; one long-exposure photo on a tripod. Learn to convert the former into the latter. We do this at scale now.
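In code, that training setup can be as small as the following toy (PyTorch, with random tensors standing in for real capture pairs; the tiny network and constants are purely illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Random tensors stand in for paired captures: "truth" plays the
# tripod long exposure, "noisy" the shaky handheld version.
truth = torch.rand(64, 1, 32, 32)
noisy = truth + 0.3 * torch.randn_like(truth)

# A deliberately tiny denoiser.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Find a system that turns this noise into that truth."
for step in range(200):
    loss = F.mse_loss(model(noisy), truth)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Nothing in that loop knows or cares whether the mapping it learns is honest; it only knows the loss went down. That’s the point of everything below.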
This is all fine, as long as you know what is being mapped to what: “This mode, moon mode, shows you what the moon would look like if you had a tripod and a long exposure, just with your shaky hand.”

You can close your eyes and imagine. The code can match various constraints and synthesize.
A legitimate fear is when this process isn’t applied just to the features of a moon, but to the features of a face. Because we threw cameras everywhere thinking they would stop crime, and now we just see blurry blobs behaving badly.

Maybe someone hits Enhance. And now it’s a real face...
And we become so sure, too sure that face is that person is that criminal is that prisoner to be. It was just an algorithm programmed to imagine the least wrong thing. But we are pushing the logic further and further out. We don’t always know what it’s sending back.
Computational photography and machine learning are lightning in a bottle, both some of the most transformative technologies I’ve played with, ever.

But you do need to know what the machine is telling you. Automated imagination is still but an informed guess.
We are specifically not prepared for ML with semantically meaningful noise. We assume when a computer doesn’t know, it will lie poorly.

That is not necessarily the case. Moon mode works. If you think it’s cheating, ok, but its lie is very difficult to distinguish from truth.
Truth is, some people want to take a great photo of the moon, and they’re walking away with exactly what they wanted, and that’s truly what the “unenhanced” photo would have contained. That “guess” was no risk, high reward.

We will have to learn to encode that. That’s “all”.
tl;dr: Deepfakes on surveillance video will emerge organically if we don’t recognize them as an obvious failure mode. Moon mode is just the harmless expression of semantically meaningful noise, a risk underlying this entire space.

Data lies and wants you to be wrong.