Thread by Steve Yedlin (10 tweets, 3 min read)
#NerdyFilmTechStuff thread:

I made a graphic about the color rendering in #KnivesOut, to show how pure photometric data from the camera can be translated for display with more complexity and nuance than is often used with generic methods.

The graphic compares:
1. Uninterpreted scene data from the camera, not prepped for display.

2. Off-the-shelf (manufacturer bundled) transformation to prepare data to be viewed.

3. #KnivesOut color rendering. (Not a shot-specific color “correction” but the core transformation for the whole project.)
Note in the 3D graphs that the off-the-shelf method differs from the source data in a blunt, simple way: largely just a uniform rectilinear expansion. The #KnivesOut method, by contrast, differs from both in more unintuitive, idiosyncratic, nuanced ways:
yedlin.net/KnivesOut_Colo…
#NerdyFilmTechStuff
To clarify based on some questions I'm getting: The "uninterpreted" imagery is the literal RGB triplets straight from the camera. They don't look right and *shouldn't* look right, because pure photometric data needs to be interpreted/prepped before viewing...
A pro camera delivers the data in a form that has an unambiguous relationship to the scene that was captured (that's why it's a true photometric record of the scene) but has a very ambiguous relationship to the final image...
We can prepare that data for the display's colorspace any way we like. And the graphic demonstrates that *how* we do that prep can be a huge leverage point in our photographic look: for example, it may be stereotypically simple or it may be complex, idiosyncratic and nuanced...
There's no color grading (also called color correction or timing) in the graphic. Grading is subjective tweaking shot-by-shot. This is the core computation for prepping uninterpreted photometric data to be viewed -- the same one that's used for every shot in the movie...
You can't properly start color grading till you can view the data as an image. This is an under-the-hood look at that low-level first step (even before color grading): authoring the conversion from pure scene-referred data to a rendered photographic look for a viewing colorspace.
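To make the idea concrete, here is a minimal sketch of what a stereotypically simple "off-the-shelf" style render does: one 3x3 primaries matrix followed by one global tone curve, applied identically to every pixel. The matrix values and the pure-power curve below are illustrative assumptions only, not any camera maker's published numbers and certainly not the #KnivesOut transform, which is exactly the kind of more complex, idiosyncratic mapping this generic recipe lacks.

```python
import numpy as np

# Hypothetical camera-native-to-display primaries matrix; the values are
# made up for illustration (rows sum to 1.0 so neutrals stay neutral).
CAM_TO_DISPLAY = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.20,  1.30, -0.10],
    [ 0.05, -0.35,  1.30],
])

def generic_display_prep(scene_linear_rgb):
    """A blunt 'off-the-shelf' style render: one matrix, one curve.

    Input: (..., 3) array of scene-linear RGB with scene white near 1.0.
    Output: display-referred RGB in 0..1.
    """
    rgb = np.asarray(scene_linear_rgb, dtype=float) @ CAM_TO_DISPLAY.T
    # A pure 1/2.4 power stands in for the tone curve here; bundled LUTs
    # typically use an s-shaped curve, but the key point is that it is
    # applied uniformly, channel by channel, to every pixel.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.4)

# A mid-gray scene exposure maps to a neutral display value:
print(generic_display_prep(np.array([0.18, 0.18, 0.18])))
```

Authoring the project-wide rendering means replacing this one-size-fits-all mapping with a transform shaped deliberately, which is the difference the 3D graphs in the graphic are visualizing.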
And to clarify the graphs themselves: they represent the RGB colors as 3D positions in space by interpreting the three channel values (red, green, blue) as x, y, z positions in a 3D coordinate system. So, the graphs are direct representations of the literal pixel code values.
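That mapping from code values to positions can be sketched in a few lines. This is just the plotting convention described above, shown with a tiny made-up 2x2 "image"; the uniform scaling at the end illustrates what a blunt rectilinear expansion of the point cloud looks like, as a contrast to a nuanced per-region reshaping.

```python
import numpy as np

def rgb_to_points(image):
    """Flatten an (H, W, 3) image into an (N, 3) point cloud whose
    coordinates ARE the pixel code values -- no reinterpretation."""
    return np.asarray(image, dtype=float).reshape(-1, 3)

# Tiny illustrative image: 2x2 pixels, code values in 0..1.
img = np.array([[[0.1, 0.2, 0.3],
                 [0.9, 0.8, 0.7]],
                [[0.5, 0.5, 0.5],
                 [0.2, 0.6, 0.4]]])

pts = rgb_to_points(img)        # shape (4, 3): four points in the RGB cube

# A 'uniform rectilinear expansion' (the blunt shape an off-the-shelf
# transform tends to produce in these graphs) just scales every axis
# the same way about the cube's center:
expanded = (pts - 0.5) * 1.2 + 0.5
```

Feeding the scatter of `pts` before and after a transform to any 3D plotting tool reproduces the kind of comparison the graphic shows; note that the uniform expansion leaves the cube's center point unmoved and shifts everything else outward by the same rule.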