Steve Yedlin (@steveyedlin) · Dec 24
Fellow film tech nerds, if you can't get enough of the ol' Vitrum Cepa this holiday season, settle in for a LONG #NerdyFilmTechStuff post on the lighting rigs we built for the heightened, theatrical, impressionistic lighthouse effect in @RianJohnson's #GlassOnion.
The effect was not merely a random/offhanded thing that we just set spinning and allowed to hit the set whenever chance determined -- rather, every time it occurs, it's pointedly designed to look a certain way and to happen with precise timing to punctuate the drama of the scene.
We wanted distinct hard edges to the sweep of the light. Even though the edges of the beam of a real lighthouse would be blurry, we wanted very crisp leading and trailing edges...
...so that its motion across the screen would be distinct and the audience would get the visceral sense of the broad sweeping light's progress within the limited view of the rectangle of screen space.
There were also many other attributes that Rian and I wanted from this effect that demanded a much more complex and nuanced solution than just panning a light over the scene. We were after:
- The ability to control in fine increments the speed of the sweep and the duration of the time between sweep-on and sweep-off -- including the ability to adjust these two attributes separately from one another.
- Extremely sharp shadows: not only for the hard leading/trailing edges mentioned above but also to project distinct shadows onto walls and to project ripply glass patterns onto actors/objects.
- Multiple lighthouse rigs had to be controlled together in a single cue with fine-tuned control over the offset between rigs, so that 2 or 3 rigs would appear to be a single beam: the sweep of one would dovetail seamlessly into the sweep of the next for perceptual continuity.
- Not only fine/precise control, but also reliable repeatability so that cues could be designed with complexity and precision during set-up and then confidently repeated on demand, take after take, at the push of a button during shooting.
- The ability to continuously trigger the cue without winding up cables that would then have to be unwound before the next take. (A sketch of how these requirements might map onto a cue model follows this list.)
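Purely as an illustration (this is not the actual control software, and every name below is hypothetical), the requirements above might map onto a cue model along these lines in Python:

```python
from dataclasses import dataclass, field

@dataclass
class RigTiming:
    """Timing for one physical drum rig within a multi-rig cue."""
    universe: int      # sACN/DMX universe the rig's motor listens on
    channel: int       # motor speed channel within that universe (1-512)
    offset_s: float    # delay vs. the cue start, tuned so this rig's
                       # sweep dovetails into the previous rig's sweep

@dataclass
class LighthouseCue:
    """One push-button cue: identical speeds and timings on every take."""
    sweep_duration_s: float   # how long the beam takes to cross the frame
    dwell_s: float            # dark time between sweep-off and sweep-on,
                              # adjustable independently of sweep speed
    rigs: list[RigTiming] = field(default_factory=list)

# Example: three rigs reading as one continuous beam.
cue = LighthouseCue(
    sweep_duration_s=2.0,
    dwell_s=6.0,
    rigs=[RigTiming(universe=1, channel=1, offset_s=0.0),
          RigTiming(universe=1, channel=2, offset_s=1.9),   # slight overlap
          RigTiming(universe=1, channel=3, offset_s=3.8)],  # for seamless handoff
)
```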
And so began the fun interdepartmental effort of designing these rigs, each of which comprised a large fresnel lamp inside a spinning drum with a variable aperture. The drum's rotation was driven by...
...a drive belt connected with a 2-to-1 gear ratio to a motor that could be controlled over DMX (the standard protocol for the remote electronic control of lighting). The DMX signal was in turn controlled by purpose-built custom software.
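As a rough sketch of the arithmetic involved (the direction of the 2-to-1 ratio and all numeric values here are my assumptions, not published specs): the drum speed needed for a given beam sweep converts to a motor speed through the belt ratio, which then scales to an 8-bit DMX level.

```python
def motor_dmx_for_beam_speed(beam_deg_per_s: float,
                             gear_ratio: float = 2.0,
                             motor_max_rpm: float = 60.0) -> int:
    """Map a desired beam angular speed to an 8-bit DMX speed level.

    The beam rotates with the drum, so the drum's RPM follows directly
    from degrees-per-second.  Assumes the motor turns `gear_ratio` times
    per drum revolution and that DMX 255 = `motor_max_rpm` (both numbers
    invented for illustration).
    """
    drum_rpm = beam_deg_per_s / 360.0 * 60.0
    motor_rpm = drum_rpm * gear_ratio              # 2-to-1 belt drive
    level = round(motor_rpm / motor_max_rpm * 255)
    return max(0, min(255, level))

# e.g. a beam that sweeps 90 degrees of the room in 2 seconds:
print(motor_dmx_for_beam_speed(45.0))  # -> 64
```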
It took a lot of specialized engineering and just plain hard work to make these lighthouse rigs what they were. Here is a (non-exhaustive) list of credits:
The drum with adjustable aperture, mounting rig, and drive mechanism designed by Key Grip Pat Daily. Assembly, fabrication, and maintenance by Best Boy Grip Andy Simmons. Welding by Rigging Grip Filip Mlandenovic. Automation expert Scott Fisher helped find/source DMX movers.
Illumination by gaffer Carlos Baker and team. (In most cases, it was a large incandescent fresnel-style lamp with the lens removed to increase sharpness, but in some cases it was an HMI.)

The DMX-controllable motor was a product made by Wahlberg Motion Design in Denmark.
Custom code for the control software was written by me and by Eric Cameron, a long-time close collaborator in coding and in color pipeline development/management. And MadMapper was used for sending the control signal over sACN.
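On the show, MadMapper handled that last sACN hop. Purely as a hedged illustration of what "sending DMX over sACN" looks like (not their setup; universe and channel numbers invented), a minimal sender using the open-source Python sacn package might look like:

```python
import time
import sacn  # pip install sacn -- an E1.31 (streaming-ACN) sender/receiver

UNIVERSE = 1
MOTOR_CHANNEL = 1   # hypothetical channel the DMX motor listens on

sender = sacn.sACNsender()
sender.start()                      # spins up the sending thread
sender.activate_output(UNIVERSE)
sender[UNIVERSE].multicast = True   # standard sACN multicast addressing

def set_motor_speed(level: int) -> None:
    """Write one 8-bit speed value; sACN keeps re-broadcasting it."""
    frame = [0] * 512
    frame[MOTOR_CHANNEL - 1] = max(0, min(255, level))
    sender[UNIVERSE].dmx_data = tuple(frame)

set_motor_speed(128)   # run at half speed
time.sleep(5.0)
set_motor_speed(0)     # stop
sender.stop()
```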
PS: forgot to explicitly mention an obvious but important demand we had for the design of the rig that was a big part of the engineering puzzle. It had to be shrouded so as not to kick light back onto the set when the beam isn't hitting (so when it's "off" it's really off!).

More from @steveyedlin

May 20, 2020
Reposting here a #NerdyFilmTechStuff thread that was buried in replies. It's about how the stage line is a guiding principle, not an inflexible mandate.

It was originally a reply to someone asking if the cut between these two shots breaks the stage line rule.
The stage line is a thumbnail for understanding, not an ironclad law to slavishly follow. One must understand screen direction to know how to break the rule, like a poet has to know grammar before creatively breaking its rules or a cubist painter has to know classical painting.
This scene is a good case in point of why understanding the stage line rule is more important than mindlessly following it. The "rule" is actually just a mental shortcut to remember to keep an audience oriented. So they don't get jolted out of the story...
Jan 18, 2020
A #NerdyFilmTechStuff rant:

While we obsess over counting camera photosites, professional imaging is sinking into the quagmire predicted by Goodhart's Law.
If our intent in demanding more K's (4K, 6K, 8K) is to safeguard resolving power, we're undermining our own intent by evaluating cameras by a metric that’s just a count of circuits across the sensor's surface that ignores their size, precision, technology, sensitivity, quality...
…and even ignores whether the camera is actually recording them (like, if it's brazenly throwing away data in a compression algorithm). And ignores other relevant components like whether the optics in front of the sensor can even resolve the tiny size of individual photosites.
Jan 12, 2020
#NerdyFilmTechStuff thread:

I made a graphic about the color rendering in #KnivesOut, to show how pure photometric data from the camera can be translated for display with more complexity and nuance than is often used with generic methods.

The graphic compares:
1. Uninterpreted scene data from the camera, not prepped for display.

2. Off-the-shelf (manufacturer bundled) transformation to prepare data to be viewed.

3. #KnivesOut color rendering. (Not a shot-specific color "correction" but the core transformation for the whole project.)
Note in the 3D graphs that the off-the-shelf method is more blunt/simple in how it differs from the source data: largely just a uniform rectilinear expansion. Whereas the #KnivesOut method differs from both in more unintuitive, idiosyncratic, nuanced ways:
yedlin.net/KnivesOut_Colo…
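To make that contrast concrete, here is a toy sketch (all numbers invented; this is not the actual #KnivesOut transform): a single global matrix can only stretch color space uniformly, while a sampled 3D LUT can move different neighborhoods of color space in different directions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rgb = np.array([0.30, 0.45, 0.20])  # hypothetical camera-linear pixel

# 1) Blunt method: one global matrix -- a uniform "rectilinear expansion"
#    that stretches every region of color space the same way.
M = np.array([[ 1.20, -0.10, -0.10],
              [-0.05,  1.15, -0.10],
              [-0.05, -0.10,  1.15]])
print("matrix:", M @ rgb)

# 2) LUT method: a sampled 3D table can bend different neighborhoods of
#    color space in different, idiosyncratic ways (numbers invented).
grid = np.linspace(0.0, 1.0, 2)                    # tiny 2x2x2 cube
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
lut = identity.copy()
lut[1, 1, 1] = [0.95, 0.98, 0.90]                  # local tweak at one corner
interp = RegularGridInterpolator((grid, grid, grid), lut)
print("3D LUT:", interp(rgb)[0])
```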
