# The Shading Process: An Overview

### Introduction

In this document, shading encompasses the entire process of computing the color of a point on a surface. The shading process requires the specification of light sources, surface material properties, and volume or atmospheric effects. The interpolation of color across a primitive, in the sense of Gouraud or Phong interpolation, is not considered part of the shading process. Each part of the shading process is controlled by a function that describes it mathematically. Throughout this document the term shader refers to a procedure that implements one of these processes. There are thus three major types of shaders:

• Light source shaders. Lights may exist alone or be attached to geometric primitives. A light source shader calculates the color of the light emitted from a point on the light source towards a point on the surface being illuminated. A light will typically have a color or spectrum, an intensity, a directional dependency and a fall-off with distance.
• Surface shaders are attached to all geometric primitives and are used to model the optical properties of the materials from which the primitive was constructed. A surface shader computes the light reflected in a particular direction by summing over the incoming light and considering the properties of the surface.
• Volume shaders modulate the color of a light ray as it travels through a volume. Volumes are defined as the insides of solid objects. The atmosphere is the initial volume defined before any objects are created.

Conceptually, it is easiest to envision the shading process using ray tracing (see Figure 1, below). In the classic recursive ray tracer, rays are cast from the eye through a point on the image plane. Each ray that intersects a surface causes new rays to be spawned and traced recursively. These rays are typically directed towards the light sources and in the directions of maximum reflection and transmittance. Whenever a ray travels through space, its color and intensity are modulated by the volume shader attached to that region of space. If that region is inside a solid object, the volume shader is the one associated with the interior of that solid; otherwise, the exterior shader of the spawning primitive is used. Whenever an incident ray intersects a surface, the surface shader attached to that geometric primitive is invoked to control the spawning of new rays and to determine the color and intensity of the incoming or incident ray from the color and intensity of the outgoing rays and the material properties of the surface. Finally, whenever a ray is cast to a light source, the light source shader associated with that light source is evaluated to determine the color and intensity of the light emitted. The shader evaluation pipeline is illustrated in Figure 2.

Figure 1: The ray tracing paradigm

This description of the shading process in terms of ray tracing is done because ray tracing provides a good metaphor for describing the optics of image formation and the properties of physical materials. However, the Shading Language is designed to work with any rendering algorithm, including scanline and z-buffer renderers, as well as radiosity programs.

The Shading Language is also used to program three other processes:

• Displacement shaders change the position and normals of points on the surface. Displacements are used to place bumps on surfaces.
• While still bound to geometric primitives, these shaders operate on depth samples after hiding has been completed. They can be used for the same purposes as regular surface or volume shaders, but can also use information from multiple depth samples at a point to generate surface and volumetric effects that ordinarily would require ray tracing.
• Imager shaders are used to program pixel operations that are done before the image is quantized and output.

As described above, a number of parts of the shading process can be customized by providing shaders. Traditionally (before RSL 2.0), the shading pipeline has operated as follows:

1. The displacement shader (if any) is executed.
2. The surface shader is executed.
3. Interior, exterior, and atmosphere shaders (if any) are then executed.

In an RSL 2.0 shading model, the shading pipeline is slightly different, in that the surface shader defines certain 'pipeline methods':

1. If the surface shader defines a displacement method, it is called.
• Otherwise the displacement shader (if any) is executed.
2. The opacity method (if any) of the surface shader is called.
3. The surface method is called.
4. Interior, exterior, and atmosphere shaders (if any) are then executed.
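As an illustration, an RSL 2.0 shader class supplying its own pipeline methods might be sketched as follows. This is a minimal sketch, not a complete production shader; the class name, parameters, and method bodies are hypothetical, though the method signatures follow the RSL 2.0 pipeline-method conventions:

```
class simpleSurface(float Kd = 0.8; float Km = 0.05)
{
    public void displacement(output point P; output normal N)
    {
        /* Step 1: runs in place of a separate displacement shader */
        P += Km * float noise(P) * normalize(N);
        N = calculatenormal(P);
    }
    public void opacity(output color Oi)
    {
        /* Step 2: the opacity method */
        Oi = Os;
    }
    public void surface(output color Ci, Oi)
    {
        /* Step 3: the surface method */
        normal Nf = faceforward(normalize(N), I);
        Ci = Oi * Cs * Kd * diffuse(Nf);
    }
}
```

Interior, exterior, and atmosphere shaders attached via separate RiAtmosphere/RiInterior/RiExterior calls then run as usual after the surface method.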

Note that volume shaders do not share member variables with surface shaders. While it is possible to define a shader that contains both surface and volume methods and use the same shader definition in RiSurface and RiAtmosphere calls, separate shader instances will result and each instance will have its own member variables. The atmosphere shader must use message passing (discussed below) to access the surface shader's public member variables.

The surface method can be replaced by these three methods (any of which may be omitted):

```
public void prelighting(output color Ci, Oi);
public void lighting(output color Ci, Oi);
public void postlighting(output color Ci, Oi);
```

The prelighting method typically performs texture lookups and BRDF calculations that are independent of the lights. The lighting method contains the illuminance loops that call the lights, and the postlighting method performs any postprocessing that is necessary after the lights are executed. These methods could be leveraged in a future re-rendering implementation. After a light is interactively modified (e.g. changing its position or intensity), the lighting method can be called with only the modified light, re-calculating its contribution.
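A surface shader might split its work across these three methods roughly as follows. This is an illustrative sketch (the class name and parameter are hypothetical):

```
class splitSurface(float Kd = 0.8)
{
    varying color diffColor = 0;

    public void prelighting(output color Ci, Oi)
    {
        /* Light-independent work: base color, texture lookups */
        diffColor = Kd * Cs;
    }
    public void lighting(output color Ci, Oi)
    {
        /* The illuminance loop that runs the lights */
        normal Nn = normalize(N);
        illuminance(P, Nn, PI/2) {
            Ci += diffColor * Cl * (normalize(L) . Nn);
        }
    }
    public void postlighting(output color Ci, Oi)
    {
        /* Postprocessing after all lights have executed */
        Oi = Os;
        Ci *= Oi;
    }
}
```

Because the per-light contribution is accumulated inside lighting(), a re-rendering implementation could re-invoke only that method for a modified light.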

In PRMan 16.0 we introduced a further elaboration of the shading pipeline: the lighting() method may be further specialized by the addition of two new pipeline methods:

```
public void diffuselighting(output color Ci, Oi, [irradiance]);
public void specularlighting(output color Ci, Oi);
```

The lighting() method itself is still called, but only for REYES grids. The two new pipeline methods decouple view-independent shading from view-dependent shading and thus permit the renderer to cache the view-independent portion. See the Physically Plausible Shading application note for more information.
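The division of labor between the two methods can be sketched as follows (an illustrative fragment, not a complete shader; the roughness value is arbitrary):

```
public void diffuselighting(output color Ci, Oi)
{
    /* View-independent contribution: safe for the renderer to cache */
    normal Nn = normalize(N);
    illuminance(P, Nn, PI/2) {
        Ci += Cs * Cl * (normalize(L) . Nn);
    }
}
public void specularlighting(output color Ci, Oi)
{
    /* View-dependent contribution: depends on the eye vector I */
    normal Nn = normalize(N);
    vector V = -normalize(I);
    illuminance(P, Nn, PI/2) {
        Ci += Cl * specularbrdf(normalize(L), Nn, V, 0.1);
    }
}
```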

In addition to the standard pipeline stages, shader objects support initialization per-instance and per grid via the construct() and begin() methods. The construct() method is limited in scope and does not permit access to varying data. It can be used to precompute uniform data, or perform other initializations which pertain to all invocations of a shader instance. The begin() method permits data to be initialized before the remainder of the pipeline runs.
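For example, a shader class might use construct() for uniform, per-instance setup and begin() for varying, per-grid setup. This is a hypothetical sketch:

```
class noisySurface(float frequency = 4)
{
    uniform float scaledFreq = 0;
    varying float pattern = 0;

    public void construct()
    {
        /* Once per instance; varying data such as P is unavailable here */
        scaledFreq = frequency * 2;
    }
    public void begin()
    {
        /* Once per grid, before the remainder of the pipeline runs */
        pattern = float noise(P * scaledFreq);
    }
    public void surface(output color Ci, Oi)
    {
        Oi = Os;
        Ci = Oi * Cs * pattern;
    }
}
```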

Per Instance:

```
construct()
```

For REYES grids with a surface method:

```
begin()
displacement()
surface()
```

Or for REYES grids with an RSL 2.0 interface:

```
begin()
displacement()
opacity() optional
prelighting() optional
lighting()
postlighting() optional
```

When caching diffuselighting() for ray hits the following methods are run:

```
begin()
displacement()
opacity() optional
prelighting() optional
diffuselighting()
postlighting() optional
```

Additionally, when the transmission hit mode indicates that opacity should be calculated by running a shader, the renderer may cache the opacity for faster execution. When caching opacity for transmission ray hits, in the presence of an opacity method, the following methods are run:

```
begin()
displacement()
opacity() optional
```

When caching opacity for transmission ray hits, in the absence of an opacity method the following methods are run:

```
begin()
displacement()
prelighting() optional
postlighting() optional
```

Finally, when running shading on a ray hit, if the surface supports caching of the view-independent shading via a diffuselighting() method, then that color will be present in Ci when the pipeline starts and the following methods will be run:

```
begin()
displacement()
opacity() optional
prelighting() optional
specularlighting()
postlighting() optional
```

When a shader is attached to a geometric primitive it inherits a set of varying variables that completely defines the environment in the neighborhood of the surface element being shaded. These state variables are predefined and should not be declared in a Shading Language program. It is the responsibility of the rendering program to properly initialize these variables before a shader is executed.

All the predefined variables that are available to each type of shader are shown in:

• Table 1: Predefined Surface Shader Variables
• Table 2: Predefined Light Source Variables
• Table 3: Predefined Volume Shader Variables
• Table 4: Predefined Displacement Shader Variables
• Table 5: Predefined Imager Shader Variables

In these tables the top section describes state variables that can be read by the shader. The bottom section describes the state variables that are the expected results of the shader. By convention, capitalized variables refer to points and colors, while lower-case variables are floats. If the first character of a variable's name is a C or O, the variable refers to a color or opacity, respectively. Colors and opacities are normally attached to light rays; this is indicated by appending a lowercase subscript. A lowercase d prefixing a variable name indicates a derivative.

All predefined variables are considered to be read-only, with the exception of the result variables, which are read-write in the appropriate shader type, and Cs, Os, N, s and t, which are read-write in any shader in which they are readable. Vectors are not normalized by default.

The geometry is characterized by the surface position P, which is a function of the surface parameters (u,v). The rates of change of the surface parameters are available as (du,dv). The parametric derivatives of position are also available as dPdu and dPdv. The actual change in position between points on the surface is given by P(u+du)=P+dPdu*du and P(v+dv)=P+dPdv*dv. The calculated geometric normal perpendicular to the tangent plane at P is Ng. The shading normal N is initially set equal to Ng unless normals are explicitly provided with the geometric primitive. The shading normal can be changed freely; the geometric normal is automatically recalculated by the renderer when P changes, and cannot be changed by shaders. The texture coordinates are available as (s,t). Figure 3 shows a small surface element and its associated state.

The optical environment in the neighborhood of a surface is described by the incident ray I and light rays L. The incoming rays come either directly from light sources or indirectly from other surfaces. The direction of each of these rays is given by L; this direction points from the surface towards the source of the light. A surface shader computes the outgoing light in the direction -I from all the incoming light. The color and opacity of the outgoing ray are Ci and Oi. (Rays have an opacity so that compositing can be done after shading. In a ray tracing environment, opacity is normally not computed.) If either Ci or Oi is not set, it defaults to black or opaque, respectively.
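This is illustrated by the classic matte shader, similar to the standard example in the RenderMan Interface Specification, which sums the incoming light (the diffuse() built-in performs an illuminance loop internally) and sets Ci and Oi:

```
surface matte(float Ka = 1; float Kd = 1;)
{
    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Os * Cs * (Ka * ambient() + Kd * diffuse(Nf));
}
```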

Predefined Surface Shader Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| Cs | color | varying/uniform | Surface color |
| Os | color | varying/uniform | Surface opacity |
| P | point | varying | Surface position |
| dPdu | vector | varying | Derivative of surface position along u |
| dPdv | vector | varying | Derivative of surface position along v |
| N | normal | varying | Surface shading normal |
| Ng | normal | varying/uniform | Surface geometric normal |
| u, v | float | varying | Surface parameters |
| du, dv | float | varying/uniform | Change in surface parameters |
| s, t | float | varying | Surface texture coordinates |
| L | vector | varying/uniform | Incoming light ray direction [1] |
| Cl | color | varying/uniform | Incoming light ray color [1] |
| Ol | color | varying/uniform | Incoming light ray opacity [1] |
| E | point | uniform | Position of the eye |
| I | vector | varying | Incident ray direction |
| ncomps | float | uniform | Number of color components |
| time | float | varying | Current shutter time |
| dtime | float | varying | The amount of time covered by this shading sample |
| dPdtime | vector | varying | How the surface position P is changing per unit time, as described by motion blur in the scene |

Result Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| Ci | color | varying | Incident ray color |
| Oi | color | varying | Incident ray opacity |

[1] Available only inside illuminance statements.

A light source shader is slightly different (see Figure 4: Light source shader state). It computes the amount of light cast along the direction L which arrives at some point in space Ps. The color of the light is Cl while the opacity is Ol. The geometric parameters described above (P, du, N, etc.) are available in light source shaders; however, they are the parameters of the light-emitting surface (e.g., the surface of an area light source), not the parameters of any primitive being illuminated. If the light source is a point light, P is the origin of the light source shader space and the other geometric parameters are zero. If either Cl or Ol is not set, it defaults to black or opaque, respectively.

Figure 4: Light source shader state
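A standard point light, similar to the pointlight example in the RenderMan Interface Specification, illustrates the use of illuminate, L, and Cl:

```
light pointlight(
    float intensity = 1;
    color lightcolor = 1;
    point from = point "shader" (0, 0, 0);)
{
    illuminate(from) {
        /* Intensity falls off with the square of the distance |L| */
        Cl = intensity * lightcolor / (L . L);
    }
}
```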

Predefined Light Source Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| P | point | varying | Surface position |
| dPdu | vector | varying | Derivative of surface position along u |
| dPdv | vector | varying | Derivative of surface position along v |
| N | normal | varying | Surface shading normal |
| Ng | normal | varying/uniform | Surface geometric normal |
| u, v | float | varying | Surface parameters |
| du, dv | float | varying/uniform | Change in surface parameters |
| s, t | float | varying | Surface texture coordinates |
| L | vector | varying/uniform | Incoming light ray direction [2] |
| Ps | point | varying | Position being illuminated |
| E | point | uniform | Position of the eye |
| ncomps | float | uniform | Number of color components |
| time | float | uniform | Current shutter time |
| dtime | float | uniform | The amount of time covered by this shading sample |

Result Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| Cl | color | varying/uniform | Outgoing light ray color |
| Ol | color | varying/uniform | Outgoing light ray opacity |

[2] Only available inside solar or illuminate statements.

A volume shader is not associated with a surface, but rather attenuates a ray color as it travels through space. As such, it does not have access to any geometric surface parameters, but only to the light ray I and its associated values. The shader computes the new ray color at the ray origin P-I. The length of I is the distance traveled through the volume from the origin of the ray to the point P.
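The standard fog atmosphere, similar to the fog example in the RenderMan Interface Specification, shows a typical volume shader: it blends the ray color towards a background color based on the distance length(I) traveled through the volume:

```
volume fog(float distance = 1; color background = 0;)
{
    /* Fraction of the ray absorbed over its length through the fog */
    float d = 1 - exp(-length(I) / distance);
    Ci = (1 - d) * Ci + d * background;
    Oi = (1 - d) * Oi + d * color(1, 1, 1);
}
```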

Predefined Volume Shader Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| P | point | varying | Light ray origin |
| E | point | uniform | Position of the eye |
| I | vector | varying | Incident ray direction |
| Ci | color | varying | Ray color |
| Oi | color | varying | Ray opacity |
| ncomps | float | uniform | Number of color components |
| time | float | uniform | Current shutter time |
| dtime | float | uniform | The amount of time covered by this shading sample |

Result Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| Ci | color | varying | Attenuated ray color at origin |
| Oi | color | varying | Attenuated ray opacity at origin |

The displacement shader environment is very similar to a surface shader, except that it only has access to the geometric surface parameters. It computes a new P and optionally a new N and dPdtime. In rendering implementations that do not support the Displacement capability, modifications to P or dPdtime will not actually move the surface (change the hidden surface elimination calculation); however, modifications to N will still occur correctly.
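A typical displacement shader perturbs P along the normal and then recomputes the shading normal. A minimal sketch (the shader name and parameters are illustrative):

```
displacement lumpy(float Km = 0.05; float frequency = 8;)
{
    /* Move the surface point along its normal by a noise-driven amount */
    P += Km * float noise(frequency * P) * normalize(N);
    /* Recompute the shading normal from the displaced position */
    N = calculatenormal(P);
}
```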

Predefined Displacement Shader Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| P | point | varying | Surface position |
| dPdu | vector | varying | Derivative of surface position along u |
| dPdv | vector | varying | Derivative of surface position along v |
| N | normal | varying | Surface shading normal |
| Ng | normal | varying/uniform | Surface geometric normal |
| I | vector | varying | Incident ray direction |
| E | point | uniform | Position of the eye |
| u, v | float | varying | Surface parameters |
| du, dv | float | varying/uniform | Change in surface parameters |
| s, t | float | varying | Surface texture coordinates |
| ncomps | float | uniform | Number of color components |
| time | float | uniform | Current shutter time |
| dtime | float | uniform | The amount of time covered by this shading sample |
| dPdtime | vector | varying | How the surface position P is changing per unit time, as described by motion blur in the scene |

Result Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| P | point | varying | Displaced surface position |
| N | normal | varying | Displaced surface shading normal |
| dPdtime | vector | varying | How the displaced surface position P is changing per unit time |

An Imager shader manipulates a final pixel color after all of the geometric and shading processing has concluded.

In the context of an imager shader, P is the position of the pixel center in current space, as it is for all shaders. The other geometric variables have their usual meanings. The variables u and v run from 0 to 1 over the entire output image (over the ScreenWindow).

The imager shader environment also provides access to texture mapping variables s,t, which are the texture mapping coordinates over the ScreenWindow. These coordinates represent pixel centers, such that calls to texture() can map an appropriately prepared image over the entire output resolution. A raster coordinate may be obtained using this formula: (s*xres-0.5,t*yres-0.5).
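The standard background imager, similar to the example in the RenderMan Interface Specification, shows a typical pixel operation: compositing a background color under the rendered pixels.

```
imager background(color bgcolor = 1;)
{
    /* Fill uncovered fraction of the pixel with the background color */
    Ci += (1 - alpha) * bgcolor;
    Oi = 1;
    alpha = 1;
}
```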

A PixelSampleImager manipulates pixel samples, prior to pixel filtering. Such a shader has access to the exact (possibly jittered) pixel positions of the samples, as specified by RiPixelSamples. The shader is still applied after all geometric processing and hidden surface determination is complete. Both Imager and PixelSampleImager shaders have access to any arbitrary output variables generated by shading.

Predefined Imager Shader Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| P | point | varying | Surface position |
| dPdu | vector | varying | Derivative of P along u |
| dPdv | vector | varying | Derivative of P along v |
| E | point | uniform | Position of the eye |
| I | vector | varying | Ray direction from eye to P in current space |
| u, v | float | varying | Surface parameters, running 0-1 over the output image resolution |
| du, dv | float | varying/uniform | Change in surface parameters |
| s, t | float | varying | Texture coordinates |
| Ci | color | varying | Pixel color |
| Oi | color | varying | Pixel opacity |
| alpha | float | uniform | Fractional pixel coverage |
| ncomps | float | uniform | Number of color components |
| time | float | uniform | Current shutter time |
| dtime | float | uniform | The amount of time covered by this shading sample |

Result Variables

| Name | Type | Storage Class | Description |
| --- | --- | --- | --- |
| Ci | color | varying | Output pixel color |
| Oi | color | varying | Output pixel opacity |

Certain shader parameters beginning with __ are interpreted specially by the renderer and help control the shading pipeline.

A value of 0 means that the shader does not compute opacity (i.e., Oi == Os). This can be used to override a transmission hit mode of "shader". For such shaders, the opacity() method will be skipped for transmission rays.

A value of 1 means that the shader does indeed compute opacity. Such shaders will be run to evaluate their opacity for transmission rays. That result may be cached by the renderer, and thus must be view-independent.

A value of 2 means that the shader computes opacity in a view-dependent manner. As such, the renderer will avoid caching opacity for transmission rays. The opacity is still cached for the purpose of controlling continuations on diffuse and specular rays, but view-dependent shadows may be generated using areashadow() or transmission(). For mode 2, the opacity() method must only depend on view-dependent entities within a check for raytype == "transmission".

The getlights() shading function may be optionally passed a category, which it matches against the shader parameter __category when constructing a list of shaders to return. getshaders() may also use category matching to control which shaders are returned.

The matching syntax is described in the Shader and Message Passing Functions documentation.
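For example, a light can declare a category via the __category parameter (the light name and category string here are hypothetical):

```
light rimlight(
    float intensity = 1;
    color lightcolor = 1;
    string __category = "rim";)
{
    illuminate(point "shader" (0, 0, 0)) {
        Cl = intensity * lightcolor / (L . L);
    }
}
```

A surface shader can then restrict an illuminance loop to lights matching that category by passing the category string as the first argument, e.g. illuminance("rim", P, normalize(N), PI/2) { ... }.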

• __ignoresShaderSpace (all shaders): Shaders can be annotated with a hint to the renderer indicating that they do not care about shader space. When uniform float __ignoresShaderSpace = 1 is found on a shader, the renderer will force the shader space to be identity. Gprims that share identical shaders but have different coordinate systems will then be combinable.
• __group (light shaders): Lights may use __group to define a light group for the purpose of creating a single AOV output from multiple lights.

A value of 0 means that the shader does not modify shadingrate (i.e., shadingratefraction = 1 and depthshadingratefraction = 1). For such shaders, the refinement() method can be skipped when dicing.

A value of 1 (the default) means that the shader's refinement method may indeed request further refinement, so it must be run when dicing.