Color For VFX (PDFDrive)
Chris Healer
CEO / CTO / VFX Supervisor
[email protected]
Converting recorded image data into gamma-corrected material for display or processing.

Scaling incoming or outgoing images to increase (stored) dynamic range, fit broadcast standards, or to work with certain LUTs.

Color transformations stored as a file or equation. These serve both technical and creative purposes.

Where we put material between stages of post production. This creates many of the obstacles we have to overcome.
Foundation
Between Cup and Lip
What is color?
The fundamental particle that transmits light is a Photon. Photons are both
waves and particles, and different combinations of frequencies (waves) and
intensities (quantities of particles) of photons hitting our eyes produce the
effect we think of as color.
How do we see it?
By the time the sixteenth candle is lit you might assume that the room would
appear to our eyes to be sixteen times as bright as it did with one candle lit. But
because we see in logarithmic values, the room actually only seems four times
as bright as it did when the first candle was lit.
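The candle arithmetic above can be checked in a few lines: because perception follows the log of the light, each doubling of candles adds one constant perceptual step, and sixteen candles sit only four steps above one.

```python
import math

# Apparent brightness grows with the log of the light, so each doubling
# of candles adds one constant perceptual "step".
candles = [1, 2, 4, 8, 16]
steps = [math.log2(n) for n in candles]
print(steps)  # [0.0, 1.0, 2.0, 3.0, 4.0]
```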
We only care about apparent changes
To affect the same apparent change, we need twice as much light:
And this is where Log and Lin come in...
You’ve probably heard these terms before, so let’s look at them for a moment.
But cameras, depending on the settings, may use the additional headroom to
gain additional stops of dynamic range. The camera will scale but not crop the
data before recording, and some highlight information can be gained that way.
Frustratingly, we don’t know if footage is scaled until we start to work with it.
LUTs
What is a LUT?
A lookup table is a general-purpose math concept. For our purposes, the data
being mapped is each separate channel of each pixel of each frame.
Often an equation to represent a curve (like a CDL) can be used to the same
effect, but without actually using a “table”. This can still be called a LUT.
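As a minimal sketch (not any particular application's implementation), applying a 1D LUT to one channel value is just an interpolated table lookup; the table values below are made up for illustration.

```python
def apply_1d_lut(value, table):
    """Apply a 1D LUT to one channel value in [0, 1], interpolating
    linearly between table entries. An image pipeline runs this on
    every channel of every pixel of every frame."""
    pos = value * (len(table) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    f = pos - lo
    return table[lo] * (1 - f) + table[hi] * f

# A made-up 5-entry brightening curve:
lut = [0.0, 0.55, 0.73, 0.88, 1.0]
print(apply_1d_lut(0.5, lut))  # 0.73 -- lands exactly on the middle entry
```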
Types of LUTs
Generally speaking, there are two kinds of LUTs: 1D LUTs and 3D LUTs.
A Luminance Curve is the special case of a 1D LUT in which all three
channel curves match.
Any color operation in any software can be represented with one of these three
transformations.
A 3D LUT is more complicated: it maps each channel against the other
channels, so it can’t really be thought of as a curve, but rather as a mapping of
one 3D space to another. 3D LUTs allow for crosstalk between the channels
and much more control, including saturation adjustments and channel swapping.
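To make the "mapping of one 3D space to another" concrete, here is a toy nearest-neighbour lookup into a tiny cube. Real applications interpolate (trilinearly or tetrahedrally) between the eight surrounding lattice points; the 2×2×2 identity cube here is purely illustrative.

```python
def apply_3d_lut_nearest(rgb, cube, size):
    """Toy nearest-neighbour lookup into a size^3 3D LUT whose entries
    are (r, g, b) tuples. Because the output red can depend on the input
    green and blue (and so on), channel crosstalk is possible -- unlike
    with a 1D LUT."""
    r, g, b = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
    return cube[r][g][b]

# A 2x2x2 identity cube: each lattice point stores its own coordinates.
size = 2
cube = [[[(float(r), float(g), float(b)) for b in range(size)]
         for g in range(size)] for r in range(size)]
print(apply_3d_lut_nearest((0.2, 0.9, 0.6), cube, size))  # (0.0, 1.0, 1.0)
```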
Linearization is done with a luminance curve (LUT)
Storage
In an ideal world...
You’ll need a Display LUT for your display device, probably sRGB (computer
monitor) or Rec709 (broadcast monitor).
The filmmakers or DI facility may have a ‘look’ for a particular shot or sequence,
and you will want to view output with the look applied.
This is most likely a 3D LUT, and will appear as a .cube or .3dl or .cms file.
The Look LUT should be applied before the Display LUT, assuming the
Display LUT is not concatenated into it.
What is LUT concatenation?
Color transforms compound on top of each other, which is pretty apparent as
we create chains of them to view footage. What may not be apparent is that
chains of transforms can be concatenated, or combined, into one transform.
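Concatenation can be sketched by baking a chain of 1D LUTs into a single table: sample the first curve, feed the result through the second, and record the combined output. The halving and doubling tables below are toy examples.

```python
def sample(table, x):
    """Interpolated 1D LUT lookup for x in [0, 1]."""
    pos = x * (len(table) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    f = pos - lo
    return table[lo] * (1 - f) + table[hi] * f

def concatenate(first, second, size=5):
    """Bake 'first then second' into one table of 'size' entries."""
    return [sample(second, sample(first, i / (size - 1))) for i in range(size)]

halve = [0.0, 0.25, 0.5]   # maps x -> x / 2
double = [0.0, 1.0, 2.0]   # maps x -> 2 * x
combined = concatenate(halve, double)
print(combined)  # [0.0, 0.25, 0.5, 0.75, 1.0] -- the chain collapses to identity
```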
If possible, we want to invert (or unwrap) the transformations from Linear Space
back to the original color space if we intend to deliver the footage to a colorist
or client.
To do this we need to apply inverse transforms in the reverse order that they
were applied.
We may also sometimes see this called a backward (vs. forward) transform.
Are all LUTs reversible?
In short, no. 1D LUTs and Luminance Curves are generally invertible. But
very few 3D LUTs are mathematically invertible, so applications don’t include
an option to invert a 3D LUT.
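For a monotonic 1D curve, inversion can be done numerically: for each output level, search for the input that produces it. This sketch uses an interpolated lookup plus a binary search, and the curve values are made up.

```python
def sample(table, x):
    """Interpolated 1D LUT lookup for x in [0, 1]."""
    pos = x * (len(table) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    f = pos - lo
    return table[lo] * (1 - f) + table[hi] * f

def invert_1d_lut(table, size):
    """Invert a strictly increasing 1D LUT by binary search: for each
    output level y, find the x with sample(table, x) == y. This works
    only because the curve is monotonic -- exactly the property most
    3D LUTs lack."""
    inv = []
    for i in range(size):
        y = i / (size - 1)
        lo, hi = 0.0, 1.0
        for _ in range(40):
            mid = (lo + hi) / 2
            if sample(table, mid) < y:
                lo = mid
            else:
                hi = mid
        inv.append((lo + hi) / 2)
    return inv

curve = [0.0, 0.25, 1.0]           # a made-up monotonic curve
inverse = invert_1d_lut(curve, 5)
# Applying curve then inverse round-trips each output level.
```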
The luminance curve a camera manufacturer chooses is based on many factors;
suffice it to say that the curve is the product of lots of testing in different
lighting and camera conditions. It is the camera manufacturer’s “special sauce”.
Producers and Directors feel more comfortable if they see a good looking
image while on set, so a color correction is often applied to the preview image.
This color correct is recorded in a CDL (Color Decision List). The correction
may be purely creative (make skin tones nicer), or may be something technical
to compensate for color temperature or poor exposure.
The CDL is applied to the RAW data before the Camera LUT is applied.
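The CDL itself is a simple per-channel formula defined by the ASC: out = (in × slope + offset) ^ power, with a separate saturation value applied across channels. A minimal per-channel sketch, with illustrative numbers:

```python
def apply_cdl(value, slope=1.0, offset=0.0, power=1.0):
    """Per-channel ASC CDL transform: out = (in * slope + offset) ** power.
    (A full CDL also carries a saturation value applied across channels.)"""
    v = value * slope + offset
    v = max(v, 0.0)  # clamp before the power function
    return v ** power

# e.g. a technical correction lifting a slightly underexposed image:
print(apply_cdl(0.18, slope=1.2, offset=0.02))  # ~0.236
```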
Leaving the shoot
We want to leave the shoot with everything we need to linearize the footage.
Namely, CDL data, knowledge of which camera was used, and hopefully
whether the camera was recording legal or extended range.
A frequent ‘Gotcha’...
The camera may be sending a Legal (not Extended) signal to the DIT.
The DIT has the option to scale in the CDL, or to scale the signal (in their
software) before creating the CDL.
The opposite case is also true, where the CDL is doing video scaling (by
modifying slope and offset), and the video is already in Extended range.
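Scaling legal to extended range is a simple remap. In 10-bit video the Rec709 legal (narrow) range puts black at code 64 and white at code 940; a sketch assuming those standard points:

```python
def legal_to_extended_10bit(code):
    """Rescale a 10-bit legal/narrow-range code value (black = 64,
    white = 940 per Rec709) onto the full 0-1023 extended range."""
    return (code - 64) * 1023 / (940 - 64)

print(round(legal_to_extended_10bit(64)))   # 0
print(round(legal_to_extended_10bit(940)))  # 1023
```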
More on Storage
Video data is really big!
Like, really big.
By using fewer bits, we have less to read and write from disk, but at the
expense of having less color information, or fewer tones available.
Name                 Bytes  Type     Range
10-bit               2      Integer  All integer values between 0-1023 (note that some formats pack as 1.25 bytes per pixel)
16-bit (half float)  2      Float    Certain floats between +/- 65504, with the most fractional resolution dedicated to 0-1
32-bit               4      Float    Certain floats between +/- 3.4 × 10^38, with the most fractional resolution dedicated to 0-1
What those ranges mean
When we say that an 8-bit integer value goes from 0-255, we really mean that
there are 256 shades of grey available between black and white. We map
these to a range of 0.0-1.0 to make things uniform and to work in floating point.
Remember, this is happening for Red, Green, and Blue at the same time.
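As a one-line sketch of that mapping, applied per channel:

```python
def normalize_8bit(code):
    """Map an 8-bit code value (0-255) onto 0.0-1.0; this happens
    independently for each of R, G and B."""
    return code / 255.0

print(normalize_8bit(0), normalize_8bit(255))  # 0.0 1.0
```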
Floating point values
Why do we call it floating point?
Because the decimal place can “float” within the significant digits specified!
The bits stored in a floating point value represent (in a mathematical way) both
the digits to return and the position of the decimal place.
Interestingly, the effect of this is that there is as much resolution dedicated
to the values between -1 and +1 as there is to values less than -1 or greater
than +1.

1.23456789        Ok
-12.3456789       Ok
123456.789        Ok
.000123456789     Ok
● Source Footage
● Stock Footage
● CG-rendered Images
● A Still Camera
● HDRI
● Scanned Film
● etc.
You’ll want to work in linear space and apply a display LUT that corresponds to
the display device you’re working on.
But a projector may want a custom display LUT to compensate for an off-white
screen or the color temperature of a bulb.
It may not be obvious that in Nuke, the colorspaces chosen for a Read node and
a Write node are opposites, which follows the convention of unwrapping
transforms before writing to disk, even though in the Nuke interface they have
the same name.
● Milky
● Contrasty
● Overexposed
● Blacks just never seem rich and dark
The CDL should go before the Camera LUT because we don’t know what’s
coming afterward. For instance, we may be loading a file for VFX purposes,
where we use the chain of transforms shown before.
Putting the CDL directly after the RAW data is the only way to ensure that it will
flow downstream regardless of what the application is, without being baked into
the RAW stream itself.
How does a CDL work?
It’s similar to the difference between ⅓ (one third) and 33%, or 33.3%, or
33.3333%. One third can’t be represented exactly as a decimal value.
It’s easier to visualize in 8-bit space, where for example if we have a red value
of 127 (~0.5 in float) and we darken it by 50%, we get a value of 63.5,
which as an integer rounds to 64. If we then brighten it by 200% (to counteract
the darkening), we get a new value of 128. The round-off error (from 127 to 128)
is a nuisance in this case, but if it’s compounded many times over, things can
get banded, clipped, etc. -- it can get messy!
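The 127 → 63.5 → 64 → 128 round trip from the example can be checked directly; a minimal sketch in 8-bit integer space:

```python
def to_8bit(x):
    """Quantize a value back to a 0-255 integer, as integer formats must."""
    return max(0, min(255, round(x)))

original = 127
darkened = to_8bit(original * 0.5)   # 63.5 rounds up to 64
restored = to_8bit(darkened * 2.0)   # 2 * 64 = 128, not the original 127
print(darkened, restored)  # 64 128
```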
Comparing Camera LUTs
AlexaV2LogC vs AlexaV3LogC: V3 came out after V2 and corresponds to updates
in the Alexa’s firmware. You may have to try both to see which one works,
based on the firmware of the camera the footage was shot on.
Comparing display LUTs
A Display LUT is used to present linear material
on screen. Unlike Camera LUTs, which are
designed to contain high-dynamic-range values, Display
LUTs will usually map 0 to 0 and 1 to 1, with a
gamma response curve in between.
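As a concrete example of such a curve, the standard sRGB encoding (from IEC 61966-2-1) maps 0 to 0 and 1 to 1, with a short linear segment near black and a 2.4-exponent curve above it:

```python
def linear_to_srgb(x):
    """IEC 61966-2-1 sRGB encoding: a linear segment near black and a
    2.4-exponent curve above it, giving roughly gamma 2.2 overall.
    Maps 0 to 0 and 1 to 1, as display LUTs typically do."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1 / 2.4)) - 0.055

print(linear_to_srgb(0.0))                 # 0.0
print(round(linear_to_srgb(1.0), 6))       # 1.0
print(round(linear_to_srgb(0.18), 2))      # mid grey encodes to ~0.46
```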
sRGB was introduced in 1996 as a standard for computer screens and the
internet, while Rec709 was established as a broadcast standard in 1990.
“Color Space” Cheat Sheet
Columns: Name · Uses · Is a Color Model · Luminance Curve
This is different from a subtractive color model like CMYK, where the
assumption is that we’re starting with white (a sheet of white paper, for
example), and colors mask white down to a final color.
Printing on paper, dyeing fabrics, projecting film, etc. are examples of subtractive
color.
Alexa LUT Generator
Where Things Are Going
Technology that is on its way
ACES
The Academy Color Encoding System has been under development for many
years, and is starting to appear in new versions of software like Nuke and
Flame.
Its goal is to standardize all steps of the color transformation process, including
cameras, storage, playback, monitors, etc.