
Lightcraft Shader Library 2014

The 2014 shader library takes a new approach to CgFX shaders. It’s designed as a starting point for people who understand building and lighting CG scenes, but may not be familiar with CgFX shaders and real-time lighting. It includes shaders for matte and glossy surfaces with opacity control for glass, and effects shaders for video and smoke “cards.”

It is designed to:

  • Produce realistic looks while not limiting artistic choice.
  • Be versatile but easily controlled.
  • Have understandable workings.

It’s also a gateway to writing your own shaders. The shader programs are documented internally and copious reference material about writing shaders is available online.

Zones in Virtual Sets

For many CG scenes, virtual sets are built in zones based on their proximity to camera. The zones range from infinitely far away (the sky/cloud environment sphere), where parallax is absent and there’s no interaction with lights on the set or talent, down to the immediate vicinity of the talent, where parallax and lighting interaction are crucial. Each zone works best with different kinds of CG lighting techniques.


Summary:

  • Long distance zone – Elements from about 1 kilometer out to infinity (e.g. the background cyclorama). No parallax is evident when the camera moves; not affected by set lighting.
  • Medium distance zone – Around 100 meters (a city block). Minimal parallax is evident even if the camera makes a big move, so only the camera-facing sides of buildings or set elements are constructed. Little to no set lighting effect.
  • Nearby zone – Around 10 meters; significant parallax. Most CG elements are fully built, and CG and on-set lighting need to match for a sense of reality.
  • Interactive zone – Within about 1 meter. Items in contact with the talent, or on which they cast shadows. Typically items in this zone are built physically.

For indoor scenes the zone scale may be compressed if the stage is small or camera working area is narrow (limiting parallax effects).

For instance, suppose the camera is in a narrow ground-floor apartment where cars and pedestrians pass outside a window. In this case the street view outside the window is in the Long Zone, while elements at the borders of the room, such as railings and an awning just outside the window, are in the Medium Zone. A CG ceiling in the room would be in the Nearby Zone, since it needs to match set lighting to appear integrated. Any CG elements close to the actors in the room would be in the Interactive Zone.
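The order-of-magnitude breakpoints above can be sketched as a small classifier. This is a hypothetical Python helper for illustration only (the shaders themselves are CgFX, and the zone boundaries are rough guides, not hard thresholds):

```python
# Illustrative only: bucket a set element into a zone by its rough
# distance from camera, using the order-of-magnitude breakpoints above.
def zone_for_distance(meters):
    if meters >= 1000:
        return "long"         # no parallax, no set lighting
    if meters >= 100:
        return "medium"       # minimal parallax, baked lighting
    if meters >= 10:
        return "nearby"       # parallax, lighting must match the set
    return "interactive"      # physical builds and keying territory

zone_for_distance(150)   # -> "medium"
```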

Lighting and Zones

The zone in which a CG element lives determines its lighting needs.

The Long Zone is the environment bubble backdrop, often a sphere.

  • This represents the limits of what we can see. There are no CG elements in this zone except the backdrop itself. Like a cyclorama, the bubble is self lit and does not respond to lighting from stage. In Previzion this bubble is often made with 4k movie textures, so that distant lights can glitter, aircraft can move across the sky or pedestrians pass by a window.

The Medium Zone exists at around 100 meters from the camera.

  • Typically only the facing sides of the buildings would be constructed. If the buildings are made of reflective glass these might use a reflection map so that the mirrored view changes slightly as the camera moves.
  • Buildings in the Medium Zone will often have “baked-in” lighting, in particular if they were reconstructed from photos.

The Nearby Zone exists at about 10 meters and closer.

  • This is the on-set area where matching set lighting to virtual lighting becomes crucial to creating an integrated look. Light maps and the dynamic point light are useful tools for this. Ambient light from a lighting bubble shot on set helps as well.

The Interactive Zone is more a matter of delicate adjustments to the keying system, to pick up shadows and spotlights cast on green surfaces. Any “green” surfaces within 1 meter of actors or props, including floors, are considered part of this zone.

Shaders and Surfaces

Each shader in the library defines a base color for the surface, a “bumpiness,” and (in some shaders) a transparency. These surface qualities are revealed by how the shaded object responds to light.

So, a shader’s surface qualities may include:

  1. Surface color (whether opaque or glassy)
  2. Transparency
  3. Bumpiness
  4. Reflectivity / refraction
  5. Light response

Base or Surface Color

The base or surface color of an object is “paint on the polygons”. It can be created several ways, but the shader library focuses on using textures or images.

image06 — Left: Diffuse with no light. Right: Diffuse with light

On the left is a sphere showing its base color texture without lighting. The texture colors are transferred directly to the screen. For contrast, on the right is the same sphere, but with lighting effects created by a light-map from a spotlight, plus both diffuse and specular highlights created by a Blinn-Phong point light.

Note that so far (on the left) we have a perfectly smooth surface, and that without lighting you see nothing but the base color.

Transparency and Alpha Maps

There are numerous kinds of transparency, but we’ll focus on one here: transparency used as a way to indicate where a surface ends, like the edge of a leaf.

Ragged, natural edges are represented with an alpha channel, which the texture carries in addition to its color channels (typically red, green, and blue).
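The alpha-cutout idea can be sketched in plain Python (real shaders do this per fragment in CgFX; the function name and threshold here are illustrative):

```python
# Sketch of alpha cutout: a texel is RGBA, and anywhere the alpha
# channel falls below a threshold the surface is treated as "not there".
def cutout(texel_rgba, threshold=0.5):
    r, g, b, a = texel_rgba
    if a < threshold:
        return None          # fragment discarded: the leaf edge ends here
    return (r, g, b)         # opaque part of the surface keeps its color

cutout((0.2, 0.6, 0.1, 0.1))   # painted black in the alpha -> None
cutout((0.2, 0.6, 0.1, 1.0))   # fully opaque -> (0.2, 0.6, 0.1)
```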

image29

In the above example we’ve added a rough black on white alpha channel to the texture on the sphere, making it look like a broken or torn framework.

image16

The texture is shown above (as seen in Photoshop). The dark pink areas are the alpha channel, created using black “paint” in Photoshop.

Bumpiness and Normal Maps

When a surface is “bumpy” we use a trick called a “normal map” which is really just another kind of texture. A normal map gives the look of more detail than the underlying geometry itself.

image22 — Material with and without normal map

On the left is the sphere with specular highlights (simulating bump or shine areas) and on the right we add bumps from a normal map.

Here are the color, specular and normal maps from the sphere:

image26

A normal map uses its RGB “colors” to define a normal vector different from that of the underlying geometry. It “bends” or “offsets” the normal from the base geometry by using RGB like XYZ vector components. Because it can vary per pixel, it’s much more detailed than the geometry.

Normals affect light response. By “bending” the normals away from being perpendicular to the surface, light striking the surface may return closer to the eye, creating a highlight, or be pushed farther away, indicating a more glancing angle.
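The RGB-to-XYZ remap described above can be sketched as follows (an illustrative Python version of the standard decode; in CgFX this happens per pixel in the shader):

```python
import math

# Decode a normal-map texel: RGB channels in [0,1] map to vector
# components in [-1,1] via the standard 2*c - 1 remap, then renormalize.
def decode_normal(r, g, b):
    x, y, z = 2*r - 1, 2*g - 1, 2*b - 1
    length = math.sqrt(x*x + y*y + z*z)
    return (x/length, y/length, z/length)

# The "flat" normal-map color (0.5, 0.5, 1.0) decodes to straight up,
# i.e. the normal is not bent at all:
flat = decode_normal(0.5, 0.5, 1.0)   # -> (0.0, 0.0, 1.0)
```

This is also why untouched normal maps look uniformly lavender-blue: that color is the unbent normal.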

Normal maps can be created a few ways. There are Photoshop plug-ins (including a free one from NVidia) and standalone programs such as nDo2 by Quixel; philipk.net has a tutorial. Numerous surface modelling and painting programs support creating normal maps as a way of reducing object complexity: a lower resolution model is created from a higher resolution model, and the difference between the two is used to create a normal map which (in many cases) can recreate the effect of the lost polygon detail.

Note that bumps created by normal maps don’t affect the profile or geometry of an object. A perfectly round sphere won’t have a profile that looks bumpy due to normal maps. There are other ways to achieve that, e.g. displacement maps, but they’re not currently part of the shader library.


Reflection / Refraction and Environment Maps

Reflection and refraction are beautiful and complex responses of a surface to light. In full generality they require expensive computation to create accurate images. However, we can get much of the effect of real reflection and refraction, within narrow limits, in real-time rendering. To do this, we use a trick called an environment map.

An environment map is in some ways the complement of a texture map. While a texture wraps around the surface of an object, an environment map shows what surrounds the object, encircling it. One form of environment map is particularly easy to understand, the spherical panorama:

image24 — Latitude/Longitude reflection map

The panorama above was created with a “latitude / longitude” mapping, where the horizon is along the middle, looking straight up is at the top, and straight down at the bottom. Left to right is a sweep of 360 degrees around, in other words the left and right edges match.
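The latitude/longitude mapping can be sketched as a direction-to-texture-coordinate function. This is an illustrative Python version (axis conventions vary between packages; here +y is up, and the function name is our own):

```python
import math

# Map a unit direction vector to latitude/longitude texture coordinates:
# u sweeps 360 degrees around (so the left and right edges meet), and
# v runs from straight down (0) through the horizon (0.5) to straight
# up (1).
def latlong_uv(x, y, z):
    u = (math.atan2(x, z) / (2 * math.pi)) + 0.5
    v = (math.asin(max(-1.0, min(1.0, y))) / math.pi) + 0.5
    return (u, v)

latlong_uv(0.0, 1.0, 0.0)   # straight up -> v = 1.0 (top of the map)
latlong_uv(0.0, 0.0, 1.0)   # horizon ahead -> v = 0.5 (middle row)
```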

A reflection effect is created by finding the part of the surrounding image to show on a surface. That’s done using a ray from the eye to the object surface and then bounced out to the environment based on the surface normal.

Shader library 2014 Docs (1)
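The bounce itself is the standard reflection formula, sketched here in Python for illustration (in CgFX this is the built-in `reflect` function):

```python
# Reflect an incoming direction I off a unit surface normal N:
# R = I - 2*(N.I)*N. This is the ray from the eye, bounced out toward
# the environment map.
def reflect(ix, iy, iz, nx, ny, nz):
    d = ix*nx + iy*ny + iz*nz
    return (ix - 2*d*nx, iy - 2*d*ny, iz - 2*d*nz)

# A ray going straight down onto an upward-facing surface bounces
# straight back up:
r = reflect(0, -1, 0, 0, 1, 0)   # -> (0, 1, 0)
```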

If we apply this technique to the sphere it looks like this:

image01 — Sphere with Reflection

Pretty convincing, but because the reflection is only based on the environment map it can’t reflect other nearby objects as a “real” mirror would.

Bump (normal) maps also affect reflection because the normal is used to determine the bounce angle:

image23 — Sphere with Reflection and Normal sampler

A surface can be reflective either by being mirrored on the outside, or by being mirrored on the inside of a transparent material like glass. The examples above assume the mirror layer is under transparent, colored “glass.”

If there is no mirror layer and we assume the object is solid but transparent, then some viewing angles reflect away (where the surface faces more away from the viewer) and some refract down into and through the object (where the surface faces the viewer more directly).
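The angle-dependent split between reflection and refraction is commonly approximated with Schlick’s formula; the sketch below uses it as an illustrative stand-in, not necessarily the exact weighting these shaders use:

```python
# Schlick's approximation of Fresnel reflectance. cos_theta is the
# cosine of the angle between the view direction and the surface normal:
# near-grazing views (cos close to 0) reflect strongly, while head-on
# views (cos close to 1) mostly refract into the glass. f0 is the
# reflectance at normal incidence (~0.04 for ordinary glass).
def fresnel_schlick(cos_theta, f0=0.04):
    return f0 + (1 - f0) * (1 - cos_theta) ** 5

facing  = fresnel_schlick(1.0)   # looking straight on -> 0.04 (mostly refraction)
grazing = fresnel_schlick(0.0)   # glancing view -> 1.0 (fully reflective)
```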

A combined reflection/refraction effect is created similarly, but has the same limitation of only showing what’s in the environment map.

image07 — Thick Glass shader with refraction

In the example above a “thick glass” object is reflecting the environment around its edges, and refracting through its middle.

Because of the limitations of real-time shaders for these effects, they’re typically used for windows at the back of a set. For the windshield of a car, a reflective surface that’s also partially see-through can be used. “See through” here means that objects behind it in the scene are visible.

Shaders and Light Response

To simulate a “light response” there are four lighting techniques in the shaders:

  1. Baked-in or self-lit.
  2. Ambient lighting from the environment around the object.
  3. A lighting map, for nearby lights that are static.
  4. A point light which can be moved, color changed, etc.

Note how the light response techniques go from static to dynamic. They also increase in render cost.

The lighting response techniques are stacked up to be available in most shaders:

Shader library 2014 Docs (2)

Each layer is programmed in the shader, and tricks are used to create each effect.
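The stacking can be sketched per pixel as follows. This is an illustrative Python reduction (scalar values stand in for RGB, and the actual CgFX shaders may combine the layers differently):

```python
# Illustrative layer stack: self-lit surfaces pass the base color
# straight through; otherwise the ambient, light-map and diffuse layers
# illuminate the base color, and specular adds on top.
def shade(base, self_lit, ambient=0.0, lightmap=0.0,
          diffuse=0.0, specular=0.0):
    if self_lit:
        return base                       # direct transfer, no lighting
    lit = base * (ambient + lightmap + diffuse)
    return lit + specular                 # specular is additive

backdrop = shade(0.8, self_lit=True)      # -> 0.8, the texture as-is
surface  = shade(0.8, False, ambient=0.3,
                 lightmap=0.4, diffuse=0.5, specular=0.1)
```

Note how turning a layer off is just leaving its contribution at zero, which matches the way the layers can be enabled independently.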

Baked-in or Self-lit

In the long zone of the scene, or in any area where there’s no lighting influence or where lighting is already baked into the textures (as from photogrammetry), a shader can support “self lit” surfaces. This doesn’t mean incandescent; rather, it is a direct transfer of color values from the base color texture on the surface to the screen. This is why the background cyclorama or environment bubble is usually considered “self lit.”

Examples: A backdrop cyclorama, or CG models from photo reconstruction, whose textures will naturally include any lighting present at the time the photos were shot.

Ambient or Image-based Lighting

Ambient lighting comes from the environment around an object. In reality an object sitting on a yellow floor gets yellowish light against its bottom. In full generality this is rendered using ray-tracing techniques that are too expensive for most real-time applications. So we use a trick.

Image-based lighting (IBL) creates an effect of ambient lighting by bouncing an environment map (spherical image around the object) off the object to light up the base color. The image used for this is low resolution and defocused.

The math is the same as for reflection, but it illuminates the base color, rather than adding its own color.

Ambient lighting appears static, but because it’s based on an environment map, that map can be rotated into position and its intensity changed.
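The ambient lookup and its rotation can be sketched as follows. This is an illustrative Python version (parameter names like `rotate_degrees` and `intensity` are our own, not the shader’s actual controls):

```python
import math

# IBL samples a (blurred) lat/long environment map along the surface
# normal. Rotating the map is just an offset on the horizontal texture
# coordinate; intensity is a simple scale factor.
def ambient_uv(nx, ny, nz, rotate_degrees=0.0):
    u = (math.atan2(nx, nz) / (2 * math.pi)) + 0.5 + rotate_degrees / 360.0
    v = (math.asin(max(-1.0, min(1.0, ny))) / math.pi) + 0.5
    return (u % 1.0, v)    # wrap: the left and right edges meet

def ambient_term(base_color, env_color, intensity=1.0):
    # Illuminate the base color, rather than adding the env color on top.
    return tuple(b * e * intensity for b, e in zip(base_color, env_color))

# A warm, dim environment sample tints the base color:
pixel = ambient_term((1.0, 0.5, 0.25), (0.5, 0.5, 0.5))
```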

Here is the sphere with a point light simulating the sun, but no ambient light from the environment map. Note that the back of the sphere is dark. While we could “fill” this area by adding a single color the look would not be very accurate.

image10

Here is the sphere with ambient lighting applied from an environment map. Note how the bottom is receiving the ground color, while the top is being tinted by the sky:

image04

The environment map used for IBL is considerably lower resolution than the backdrop and reflection maps. It looks like this:

image15

Not only is this smaller, it’s also heavily filtered. Without the filtering, the ambient lighting effect can make the surface look reflective. To control and avoid this, the shader allows for additional blurring.
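The effect of that blurring can be sketched with a simple box filter over one row of the map (illustrative Python; a real implementation filters the whole image, typically on the GPU):

```python
# One-dimensional box blur over a row of the environment map, wrapping
# at the ends since the lat/long edges meet. Heavier filtering pushes
# the map toward broad ambient color and away from a mirror-like look.
def box_blur_row(row, radius):
    n = len(row)
    out = []
    for i in range(n):
        window = [row[(i + k) % n] for k in range(-radius, radius + 1)]
        out.append(sum(window) / len(window))
    return out

# A single bright texel spreads into its neighbors:
box_blur_row([0, 0, 3, 0, 0], 1)   # -> [0.0, 1.0, 1.0, 1.0, 0.0]
```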

Light Map

A light map is a separate texture which “paints” lighting effects onto a surface. Imagine separating a photo reconstruction texture into fully diffuse color (the base color) and the lighting affecting it: a color image covering the whole surface of the object which represents only the bright sunlight and dark shadows.

On a CG model this technique separates painting the surface for color, from applying light to it. Because of this, the technique allows for swapping lighting effects.

Overall, light-maps allow complex lighting environments to be applied completely separately from the building and coloring of models. Many lights can be accurately represented in a light-map.

Here is a light-map effect based on a single spotlight:

image05

The “lighting texture map” used to create this effect is this:

image09

This particular texture is another “latitude / longitude” map that covers the sphere. You can use any UV projection that covers the object to adequate resolution.

Because the light map is likely to wrap around the object differently from its base color texture, there is a separate UV map used for the lightmap.
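The dual-UV idea can be sketched as follows (illustrative Python, with texture lookups faked as dictionaries; in the shader both lookups are texture samples):

```python
# The base color is sampled with the model's first UV set (uv1), the
# light map with its own second UV set (uv2), and the two are
# multiplied per pixel.
def lit_color(base_tex, light_tex, uv1, uv2):
    return base_tex[uv1] * light_tex[uv2]

base_tex  = {(0.25, 0.5): 0.8}    # base color at the uv1 coordinate
light_tex = {(0.10, 0.9): 0.5}    # spotlight falloff at uv2
pixel = lit_color(base_tex, light_tex, (0.25, 0.5), (0.10, 0.9))  # -> 0.4
```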

Creating light maps requires making the object white, with bumps or normals, then rendering a “baked texture” from the lights shining on it. This can be done with Mental Ray or V-Ray in Maya. See the Maya manual under “Baking lighting.”

Point light

The final light effect is a “Blinn-Phong” point light. This models a single colored point light source at an arbitrary position, with a “sharpness” that determines how much “specular” vs. “diffuse” highlighting is created.

The effect on the object is a circle of diffuse light around a highlight (specular). The diffuse area is the base color of the surface, illuminated by the light color. The specular area can be varied in “sharpness” and can be mapped to a texture which replaces the base color in the highlight area.
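The underlying math is the classic Blinn-Phong model, sketched here in Python for illustration (the shader does the same per pixel in CgFX):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Blinn-Phong for one point light: diffuse comes from N.L, specular from
# N.H raised to the "sharpness" exponent, where H is the half vector
# between the light direction L and the view direction V.
def blinn_phong(n, l, v, sharpness):
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    diffuse  = max(0.0, dot(n, l))
    specular = max(0.0, dot(n, h)) ** sharpness
    return diffuse, specular

# Light and viewer both directly above the surface: full diffuse and
# maximal specular. Higher sharpness tightens the highlight.
d, s = blinn_phong((0, 0, 1), (0, 0, 1), (0, 0, 1), 32)   # -> (1.0, 1.0)
```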

image25

In the example above the specular texture map looks like “smooth abrasions” on the surface. Using a matching normal map can add corresponding pitting effects (as above).

The specular map used for the above:

image14

The normal map and specular map for this example were both created from the same tiled texture.

Download

Download the Shader Library 2014 package from the Dashboard.