# Physically Plausible Shading in RSL

*March, 2011*

- Integration Formulations for the Rendering Equation
- New Integrators, New Pipeline Methods
- Writing a Physically Plausible Material
- Using the `indirectspecular` Integrator
- Writing an MIS-aware Material
- Writing a Physically Plausible Light
- Summary of New Features
- New Standard Shaders
- RSL Translation of `directlighting()`
- Additional References

### Integration Formulations for the Rendering Equation

CGI rendering software approximates **the rendering equation**. This equation
models the interaction of shape, materials, atmosphere, and light, and, like
many physics-based formulations, takes the form of a complex multidimensional
integral. The form of the equation is such that it can only be practically
approximated; this is accomplished by applying generic numerical integration
techniques to produce a solution. The goal of rendering algorithm R&D is to
produce alternate formulations of the equation that offer computational or
creative advantages over previous formulations. At the heart of the rendering
equation is the description of how light is scattered when it arrives at a
surface. The fundamental question is: *what light is scattered to the eye at
this point in the world?* The portion of a rendering program focused on
solving this problem can be called the **surface shader**, or the **material
integrator**. In RenderMan, the canonical surface shader has traditionally
been decomposed into a summation of the results of several independent
integrators.

In 1990, we represented a canonical material like this:

```
Ci = Ka*ambient() + Kd*diffuse(N) + Ks*specular(I, N, roughness)
```

Additional terms were later added to simulate incandescence and translucence but, fundamentally, the simplicity and approximate nature of this formulation were driven by the computational resources available at the time.

By 2005, armed with significantly more computational resources, we could
afford ever more accurate approximations to the physical laws. Rather than
start afresh, new terms accreted onto our canonical material integration,
evolving into a morass of interoperating `illuminance loops`, `gather`
blocks, light categories, and `indirectdiffuse` calls. Moreover, the fact
that many of our integrators rely on pre-baked data means that rendering
pipelines have also grown more complex and therefore difficult to maintain and
comprehend.
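
For readers who never wrote one, here is a minimal sketch of that era's pattern; the parameter names, cone angle, and sample counts are illustrative only, not taken from any particular production shader:

```
/* Sketch of the circa-2005 pattern: direct light via an illuminance
   loop, indirect specular via a gather block. Names are illustrative. */
color direct = 0, indirect = 0;
normal Nn = normalize(N);
vector In = normalize(I);

illuminance(P, Nn, PI/2)            /* loop over the light sources */
{
    direct += Cl * (Nn . normalize(L));
}

vector R = reflect(In, Nn);
color hitc = 0;
gather("illuminance", P, R, radians(5), 16, "surface:Ci", hitc)
{
    indirect += hitc;               /* a ray hit a shaded surface */
}
indirect /= 16;

Ci = Kd * direct + Ks * indirect;
```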

In 2011 our computers are faster still, and there is a growing school of thought holding that physics-based (ray-traced) rendering is now the most efficient way to produce CGI. The argument is that it is cheaper for a computer to produce physically plausible images automatically than it is for lighting artists to mimic the physical effects with cheaper (i.e. non ray-traced) integrators. If your production's art direction and geometric complexity are compatible with the computing budget and other constraints associated with physically plausible light transport, this may well be true.

If so, the time has arrived to embrace geometric sources of illumination (i.e. area lights) and to jettison the venerable, but entirely non-physical, point light source. Once area lights enter the picture, integration becomes more expensive, largely because the shadows cast by area lights are expensive to compute. Add HDR IBL (high dynamic range, image-based lighting) and the previous generation of RSL shaders has been pushed past its limit. What we need is new integration support from the renderer.

### New Integrators, New Pipeline Methods

Here's a beguilingly simple characterization of a material integrator for the next generation:

```
public void lighting(output color Ci)
{
    Ci = directlighting(material, lights) +
         indirectdiffuse(material) +
         indirectspecular(material);
}
```

The surface shader's lighting method is where we integrate the various
lighting effects present in the scene. To accomplish physically
plausible shading under area lights, we've combined the `diffuse` and
`specular` integrators into the new `directlighting` integrator. Like
our old friends, the new integrator is only concerned with light that
can *directly* impinge upon the surface. Unlike them, the combined integrator
offers computational advantages since certain computations can be shared.
And to support the physical notion that the specular response associated
with indirect light transport paths should match the response to direct
lighting, we introduce a new `indirectspecular` integrator. By moving
the area light integration logic into the renderer we've made it possible
for RSL developers to eliminate many lines of `illuminance` loops
and `gather` blocks.

In modern RSL we can parameterize the new integrators with shader objects.
Here, you see that the material and the list of light shaders appear as formal
arguments to the new shading functions. This approach offers complete
flexibility for shader factoring strategies supporting both coshader and
standalone material configurations. We also leverage modern RSL's *shader
objects* to define **standard material and light interfaces**. The interfaces
take the form of shader-object methods and represent the negotiation between
integrator, light and material to identify important sampling directions. To
interoperate with the new integrators, shader developers must simply provide
implementations for the new standard methods.

To round out the set of extensions, we are introducing **new pipeline
methods** that can be invoked by the renderer. Recall that a surface shader
at a given position may be executed as part of an integration from another
shader at another position. If the upstream integrator is responsible for
solving a diffuse light-transport term we must avoid including specular
(view-dependent) light paths in the solution. The *raytype* can be used
to characterize the upstream integrator and can be consulted to satisfy
this requirement. Taking this one step further, we divide the lighting
method into two new optional shading pipeline methods: **diffuselighting**
and **specularlighting**. Beyond simply eliminating the requirement for
unsightly raytype checks in your shaders, the new methods enable significant
renderer optimizations. A new **radiosity cache** allows us to reuse the
`diffuselighting` result across multiple nearby specular ray probes and this
can produce dramatic speedups in scenes with complex global illumination
effects.

Here, then, is the basic form for a physically plausible surface shader:

```
shader basic(...)
{
    shader m_lights[];
    shader m_material;

    public void begin()
    {
        m_lights = getlights();
        m_material = getshader("material");
        // we can use 'm_material = this;' if we want to implement
        // the standard material interface in this compilation unit
    }
    public void opacity(output color Oi)
    {
        m_material->opacity(Oi);
    }
    public void prelighting(output color Ci, Oi)
    {
        m_material->prelighting(Ci, Oi);
    }
    public void lighting(output color Ci, Oi)
    {
        Ci += directlighting(m_material, m_lights) +
              indirectdiffuse(nsamps, m_material) +
              indirectspecular(m_material);
    }
    public void diffuselighting(output color Ci, Oi)
    {
        Ci += directlighting(m_material, m_lights) +
              indirectdiffuse(m_material);
    }
    public void specularlighting(output color Ci, Oi)
    {
        Ci += directlighting(m_material, m_lights) +
              indirectspecular(m_material);
    }
}
```

Clearly, additional integration terms and pattern generators have
roles to play in this framework. For example, if you want subsurface
scattering you might add the appropriate RSL to both the `lighting` and
the `diffuselighting` methods. Special materials may still be implemented
with custom integration strategies, but as you might expect, materials
implemented within the built-in integration scheme will likely offer
higher performance.

### Writing a Physically Plausible Material

There are only a few, straightforward aspects to writing a basic plausible
material that's compatible with the new integrators. First and foremost,
you must implement the `evaluateSamples` method. Integrators first request
samples from light shaders and it is this collection of samples that are
delivered to your material's `evaluateSamples` method. The combination of an
individual sample with the `I` vector forms the input to your BRDF
formulation. Your job is just to loop over the input radiance samples
writing to the `materialResponse` field of the `__radiancesample`
struct. The renderer may invoke your `evaluateSamples` method one or more
times to obtain a final result and provides its integration mode via the
`distribution` parameter (currently one of "diffuse" or "specular").

For example:

```
// material->evaluateSamples
void evaluateSamples(string distribution;
                     output __radiancesample samples[])
{
    if (distribution == "diffuse")
    {
        uniform float i, nsamps = arraylength(samples);
        for (i = 0; i < nsamps; i += 1)
            samples[i]->materialResponse =
                M_INVPI * (m_Kd * m_Nn.samples[i]->direction);
    }
    else if (distribution == "specular")
    {
        evaluateSamplesAS(m_ks1, m_roughnessA1, m_roughnessB1,
                          dPdu, Nn, In, samples);
        accumulateMaterialResponse(samples);
        evaluateSamplesAS(m_ks2, m_roughnessA2, m_roughnessB2,
                          dPdu, Nn, In, samples);
        accumulateMaterialResponse(samples);
    }
}
```

It is the responsibility of the material to compute the `materialResponse`
for each incoming sample according to the integrators' distribution requests.
These samples may arrive from any source of radiance, along direct or indirect
light paths. In the example above, our material has a Lambertian diffuse
component, which is trivial to compute from the surface normal and the
incoming radiance direction. For our specular response, we employ a new
built-in anisotropic specular term, from the Ashikhmin/Shirley shading model,
to produce our material response. Since this shader's specular model has two
specular lobes, we invoke the `evaluateSamplesAS` function twice and
accumulate them with a new `accumulateMaterialResponse` utility function.
To produce a single-lobed specular response (using Phong, Microfacet, etc.), we
could simply follow the pattern of our Lambertian response and write directly
to the `materialResponse` field of each sample.
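
For instance, a hypothetical single Phong-style lobe (with assumed parameters `m_Ks` and `m_exponent`) could replace the two `evaluateSamplesAS` calls above by writing the response per sample, just as the diffuse branch does:

```
// Hypothetical single-lobe alternative to the two evaluateSamplesAS
// calls above: write the specular response directly, per sample.
else if (distribution == "specular")
{
    uniform float i, nsamps = arraylength(samples);
    vector Rn = normalize(reflect(normalize(I), m_Nn));  // mirror direction
    for (i = 0; i < nsamps; i += 1)
    {
        float RdotL = max(0, Rn . samples[i]->direction);
        samples[i]->materialResponse = m_Ks * pow(RdotL, m_exponent);
    }
}
```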

### Using the `indirectspecular` Integrator

So far our material has been able to respond to incoming sources of
radiance but unable to deliver reflection and refraction (indirect specular)
effects. We must assist the renderer by identifying important specular
directions and resort to the new standard interface, the `generateSamples`
method, to accomplish this. As we'll see below, this same interface is
useful for direct lighting in the context of multiple importance sampling.

For now, let's worry about reflections. The RSL `gather` construct is
a very powerful tool used most often to compute indirect specular effects.
To use `gather`, we provide a *cone* that approximates the specular
transport at the current shading point. When radiance sources are found
within this cone via ray tracing, `gather` passes them to your integrator
as represented by the hitblock. In this model, `gather` generates samples
within your cone and then your code evaluates and integrates those samples
into a final indirect specular result. To achieve efficient indirect specular
results, it's helpful to provide the renderer with additional clues within the
cone characterizing the specular response at an individual direction. To serve
this requirement, we extended gather to support shader-provided samples with
concomitant importance weights. Now we take this one step farther to
facilitate the unification of specular response to direct *and* indirect light
with the introduction of the `indirectspecular` integrator and a
standardized material method: `generateSamples`. In order to use
`indirectspecular`, a material must implement this new standard method. As
with the `evaluateSamples` method, we are given a distribution parameter,
but, unlike `evaluateSamples`, we're responsible for *creating* an array of
samples according to our integration requirements. Again, the `distribution`
parameter can guide the response to `generateSamples` and may be one of
"specular", "indirectspecular", or "diffuse" according to the integration
requirements of your material. N.B.: currently "diffuse" isn't implemented,
as multiple importance sampling of diffuse distributions is less efficacious.

Here's a short example built atop the new anisotropic specular (AS) shading functionality:

```
// material->generateSamples
void generateSamples(string distribution;
                     output __radiancesample samples[])
{
    if (distribution != "diffuse")
    {
        reserve(samples, m_nsamps);
        generateSamplesAS(m_ks1, m_nsamps/2, m_roughnessA1, m_roughnessB1,
                          Nn, dPdu, In, samples);
        generateSamplesAS(m_ks2, m_nsamps/2, m_roughnessA2, m_roughnessB2,
                          Nn, dPdu, In, samples);
        uniform float nsamplesPerComponent[2] = {m_nsamps/2, m_nsamps/2};
        normalizeMaterialResponse(samples, nsamplesPerComponent);
    }
}
```

In the simplest case, we must choose the size of our samples array, then
write to the `direction`, `distance`, and `materialResponse`
fields of each sample. We would employ a sample generator within the
reflection cone implied by the surface normal and our material roughness
or shininess. In our two-lobed material, we divide our sample
budget across two distributions and let `generateSamplesAS` produce the
samples for us. When combining multiple specular distributions independently
we must normalize them to support energy conservation laws. This is performed
by the new `normalizeMaterialResponse` shading function. As we'll see, the
enforcement of physical laws remains optional; it's entirely possible
to enforce or ignore them as creative requirements dictate.
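
Putting this together, a `specularlighting` method might share one set of material-generated samples between the two integrators via the optional `"materialsamples"` parameter (documented in the tables below). This is only a sketch, reusing the `m_material` and `m_lights` members from the earlier *basic* shader:

```
// Sketch: share material-generated samples between indirectspecular
// and directlighting via the optional "materialsamples" parameter.
public void specularlighting(output color Ci, Oi)
{
    __radiancesample sharedSamples[];   // empty: indirectspecular fills it
    Ci += indirectspecular(m_material, "materialsamples", sharedSamples);
    Ci += directlighting(m_material, m_lights,
                         "materialsamples", sharedSamples);
}
```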

### Writing an MIS-aware Material

The new directlighting integrator uses a technique called
**Multiple Importance Sampling** (MIS) to compute the light transfer.
This means that the lighting integral is solved by generating samples
from both the lights and the surfaces and carefully combining them to
minimize noise. At a high level, the parts where the surface does a
good job and the parts where the light does a good job are combined in
an unbiased way to get a solution that is better than either light
sampling or material sampling alone. We can express this in pseudo-code
as follows:

```
foreach light l
    l->generateSamples(samples)
    material->evaluateSamples(samples)
    l->shadowSamples(samples)
    MISAccumulateSamples(samples)

material->generateSamples(samples)
foreach light l
    l->evaluateSamples(samples)
    l->shadowSamples(samples)
    MISAccumulateSamples(samples)
```

Notice that for MIS to work both the light and the surface need to generate
and evaluate samples. MIS also requires that every sample contains a
probability density function (**PDF**) for
that sample direction for both the light and the material so that the
appropriate MIS weights can be determined for each sample.
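
Concretely, one common weighting choice is the balance heuristic of Veach and Guibas: a sample that the light generated with PDF `lightPdf`, and that the material could have generated with PDF `materialPdf`, contributes with weight `(nl * lightPdf) / (nl * lightPdf + nm * materialPdf)`, where `nl` and `nm` are the light and material sample counts; material-generated samples are weighted symmetrically. The particular variance reduction heuristic `directlighting` applies is selected by its optional `"heuristic"` parameter, described below.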

For a material to work with the new integrators it needs to implement a pair
of methods, `evaluateSamples` and `generateSamples`. `evaluateSamples`
takes a set of samples generated by the light and determines the
material response for each sample. `generateSamples` creates a new set
of samples that holds the material response and sample direction.

We can show how this is done with a simple specular Phong model. We
will generate samples *uniformly* over a cone with an angle of
`maxConeAngle` and then weight by the Phong reflection model. For
`evaluateSamples` we will see if the direction lies inside the cone and
if so, weight by the reflection model. This simple strategy makes for
a good example, but is generally insufficient as a means to minimize
variance. More sophisticated implementations, as embodied in our
`generateSamplesAS` and `generateSamplesEnv` functions, strive to
deliver *non-uniform* sample distributions and PDF values.

`generateSamples` must fill in the direction and distance (infinity) for
each sample, as well as the `materialResponse` (BRDF value) and the
`materialPdf` (note the comments in the example):

```
// material->generateSamples
public void generateSamples(string distribution;
                            output __radiancesample samples[])
{
    if (distribution != "diffuse" && m_nsampsSpec > 0)
    {
        uniform float i;
        resize(samples, m_nsampsSpec);
        for (i = 0; i < m_nsampsSpec; i += 1)
        {
            // sample the cone centered around m_Rn:
            // draw samples uniformly along m_Rn (cosTheta)
            // and uniformly around the cone (phi), then construct
            // a direction using the basis m_Rn, m_Tn, and m_Bn
            float cosTheta = 1 - random()*(1 - m_cosMaxConeAngle);
            float sinTheta = sqrt(1 - cosTheta * cosTheta);
            float phi = random() * M_TWOPI;
            vector Ln = (m_Rn * cosTheta +
                         m_Tn * sinTheta * cos(phi) +
                         m_Bn * sinTheta * sin(phi));
            samples[i]->direction = Ln;

            // these samples should go out a long long way (infinity)
            samples[i]->distance = 1e20;

            // compute the Phong BRDF for the generated direction.
            // NOTE: many BRDF models have a 1/cos term that mostly
            // cancels out the NdotL term in the illumination integral
            // over the hemisphere. Phong doesn't have this term and
            // didn't include an NdotL term, so we do not include an
            // NdotL term in the materialResponse for the model:
            // RdotL = m_Rn.Ln = cosTheta
            float phong = pow(cosTheta, m_exponent);
            samples[i]->materialResponse = specularColor * phong;

            // set the materialPdf for this sample. Since we are
            // generating samples uniformly from the cone, this is
            // simply 1/(solid angle of the cone).
            samples[i]->materialPdf = m_conePdf;
        }
    }
}
```
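
The example above assumes several members prepared elsewhere, for instance in the shader's `begin()` method. Here is one hedged way to set them up, choosing `m_Tn` as any convenient vector perpendicular to the mirror direction; the remaining names (`m_exponent`, `m_nsampsSpec`, `specularColor`, `maxConeAngle`) are ordinary shader parameters of this example:

```
// Sketch of the member setup assumed by the example above.
public void begin()
{
    varying normal Nn = normalize(N);
    m_Rn = normalize(reflect(normalize(I), Nn));   // mirror direction
    m_Tn = normalize(vector(Nn) ^ m_Rn);           // perpendicular to m_Rn
    m_Bn = m_Rn ^ m_Tn;                            // completes the basis
    m_cosMaxConeAngle = cos(maxConeAngle);
    // the solid angle of a cone with half-angle theta is
    // 2*pi*(1 - cos(theta)); uniform sampling within the cone has a
    // PDF equal to its reciprocal
    m_conePdf = 1 / (M_TWOPI * (1 - m_cosMaxConeAngle));
}
```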

`evaluateSamples` must fill in the `materialResponse` (BRDF value) and
`materialPdf` for each sample (again, note the comments in the example):

```
// material->evaluateSamples
public void evaluateSamples(string distribution;
                            output __radiancesample samples[])
{
    if (distribution != "diffuse")
    {
        uniform float i, alen = arraylength(samples);
        for (i = 0; i < alen; i += 1)
        {
            float RdotL = m_Rn.samples[i]->direction;

            // it is necessary that evaluateSamples for specular
            // only set the materialPdf > 0 if that direction could
            // actually be generated by generateSamples
            if (RdotL < m_cosMaxConeAngle)  // outside the cone we sample
            {
                samples[i]->materialResponse = 0;
                samples[i]->materialPdf = 0;
            }
            else
            {
                float phong = pow(RdotL, m_exponent);
                samples[i]->materialResponse = specularColor * phong;

                // the sample could have been generated with probability
                // of 1/(solid angle of the cone), since we are uniformly
                // sampling the cone in generateSamples.
                // if no samples would have been generated in
                // generateSamples we need to set the materialPdf to 0
                if (m_nsampsSpec > 0)
                    samples[i]->materialPdf = m_conePdf;
                else
                    samples[i]->materialPdf = 0;
            }
        }
    }
}
```

There is one very important constraint on `generateSamples` and
`evaluateSamples`: if a sample direction and response could have been
generated by `generateSamples` with some non-zero PDF value, when `evaluateSamples`
is called with that same direction the PDF value *must* be the same. For the
example above this means that `evaluateSamples` has to check the same cone
as `generateSamples` *and* must set a PDF value to 0 outside the cone (since
`generateSamples` can't generate a direction outside of the cone). If the
number of samples created by `generateSamples` is 0, then
`evaluateSamples` must also set the PDF to 0.

### Writing a Physically Plausible Light

It's much less likely that you'll be spending time writing light shaders. This is because the richness of lighting effects has less to do with the variety of lights in the scene and more to do with the richness of surfaces and their locations relative to each other and the lights. We provide a basic collection of light shaders and you may find that they satisfy most of your requirements. If not, it's still quite easy to develop a light shader.

First and foremost, a light must respond to the standard `generateSamples`
method for lights. The material provides an integration domain ("hemisphere"
or "sphere") that specifies whether samples below the "horizon" are of
interest. A non-physical point light should only generate a single sample,
but it should be noted that the renderer will infer this behavior when it
encounters lights that don't support the new light interface. The
responsibility of the light's `generateSamples` method is to provide an
array of `__radiancesample` structs with these fields filled in for each
sample:

- `direction` - the normalized direction from the surface point to the light sample.
- `distance` - the distance between the surface point and the light sample.
- `radiance` - the emitted light at the sample along the sample direction.
- `lightPdf` - the PDF value for the sample. Typically this represents a measure of the solid angle as a fraction of the integration domain.
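
For illustration, here is a hedged sketch of `generateSamples` for a rectangular area light that samples its surface uniformly. The members `m_corner`, `m_edge1`, `m_edge2`, `m_lightNormal`, `m_area`, `m_radiance`, and `m_nsamps` are assumed to be set up elsewhere in the light; `Ps` is the surface point being shaded, as in traditional light shaders:

```
// Sketch: uniform area sampling of a rectangular light.
public void generateSamples(string integrationdomain;
                            output __radiancesample samples[])
{
    uniform float i;
    resize(samples, m_nsamps);
    for (i = 0; i < m_nsamps; i += 1)
    {
        point Pl = m_corner + random() * m_edge1 + random() * m_edge2;
        vector L = Pl - Ps;
        float d = length(L);
        vector Ln = L / d;
        float cosLight = -(m_lightNormal . Ln);    // cosine at the light
        samples[i]->direction = Ln;
        samples[i]->distance = d;
        samples[i]->radiance = (cosLight > 0) ? m_radiance : color(0);
        // uniform sampling over the area has PDF 1/area; convert it to
        // the integration domain at the receiver with d^2/cos (see the
        // PDF discussion in the light interface summary below)
        samples[i]->lightPdf = (cosLight > 0) ? d*d / (cosLight * m_area) : 0;
    }
}
```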

Next, a light may respond to the standard `shadowSamples` method. In the
context of physically plausible rendering, the biggest computational load is
often the computation of the *visibility function*. This suggests a design
where a light's sample generator is decoupled from the computation of
shadowing. Now the integrator can request the expensive visibility function
after the combination of light intensity with material response. As a
consequence, many fewer visibility tests may result. To respond to the
`shadowSamples` request, a traditional shadow map lookup is insufficient
due to the fact that it encodes visibility of the scene to an individual
point. We recommend that light shaders employ the new `areashadow` shading
function since it supports both deep shadows and full-blown ray tracing
visibility queries.
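
By way of illustration only, here is a `shadowSamples` sketch that ray traces visibility per sample with the long-standing `transmission()` shadeop; the recommended `areashadow()` function batches this work and can skip samples whose contribution is negligible:

```
// Sketch: per-sample visibility via transmission(); the new areashadow()
// shading function is the recommended (and more efficient) alternative.
public void shadowSamples(output __radiancesample samples[])
{
    uniform float i, alen = arraylength(samples);
    for (i = 0; i < alen; i += 1)
    {
        point Pl = Ps + samples[i]->distance * samples[i]->direction;
        samples[i]->lightVisibility = transmission(Ps, Pl);
    }
}
```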

Finally, to support the full MIS framework, a light must implement the
`evaluateSamples` method. In this context the material has identified
important directions and the light must now write to the `radiance` and
`lightPdf` fields. Typically, the light must determine whether a ray
from the surface point along the provided direction would intersect the
geometry of the light. For simple area light shapes, this is a simple
computation. If the intersection point is at a distance farther than
the current value of the `distance` field, a light should leave the sample
untouched. To handle arbitrary shapes for area lights, a future
release of PRMan will offer facilities to automatically compute
the intersection.

It is also important to remember, as pointed out in the previous section, that
the light *must* produce the same `radiance` and `lightPdf` values whether
they come from `generateSamples` or `evaluateSamples`. Asymmetries will produce
unbalanced results.
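
Continuing the rectangular-light sketch from above, an `evaluateSamples` implementation might intersect each material-generated direction with the light's plane and rectangle, taking care to report exactly the PDF that `generateSamples` would have used:

```
// Sketch: evaluate material-generated directions against the same
// rectangular light used in the generateSamples sketch above.
public void evaluateSamples(string integrationdomain;
                            output __radiancesample samples[])
{
    uniform float i, alen = arraylength(samples);
    for (i = 0; i < alen; i += 1)
    {
        vector Ln = samples[i]->direction;
        float denom = m_lightNormal . Ln;
        if (denom < 0)    // the ray approaches the emitting side
        {
            float t = (m_lightNormal . (m_corner - Ps)) / denom;
            if (t > 0 && t < samples[i]->distance)
            {
                point Pl = Ps + t * Ln;
                vector rel = Pl - m_corner;
                float u = (rel . m_edge1) / (m_edge1 . m_edge1);
                float v = (rel . m_edge2) / (m_edge2 . m_edge2);
                if (u >= 0 && u <= 1 && v >= 0 && v <= 1)
                {
                    samples[i]->distance = t;
                    samples[i]->radiance = m_radiance;
                    // same area-to-solid-angle conversion as generateSamples
                    samples[i]->lightPdf = t*t / (-denom * m_area);
                }
            }
        }
    }
}
```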

### Summary of New Features

#### The New Pipeline Methods

The new material pipeline methods play the important role of expressing the separation of view-independent and view-dependent lighting. View-independent lighting and opacity is cached in the new radiosity cache and this can deliver significant speedups.

```
public void diffuselighting(output color Ci, Oi);
public void specularlighting(output color Ci, Oi);
```

#### The New Integrators

The new integrators hide the complex details associated with importance sampling and support area lightsources. Integrators invoke the standard sample generation and evaluation methods of lights and materials. The basic unit of information transfer is an array of radiance samples.

```
color directlighting(shader material, shader lights[], ...)
```

| Name | Type | Description |
| --- | --- | --- |
| "integrationdomain" | string | Specifies the domain over which the directlighting integration is performed. Use "sphere" when your material is translucent and "hemisphere" when it is opaque. |
| "mis" | uniform float | Specifies the sampling strategy employed by the integrator. Use 0 to consider light samples only. Use 1 to perform multiple importance sampling, considering both light and material samples. Use 2 to consider material samples only. Usually the value 1 is suggested. A value of 2 is suggested for very narrow specular distributions (i.e. chrome) and 0 is suggested for very small light sources. |
| "diffuseresult" | output color | The results of the diffuse portion of the integration are written to this optional output parameter. When directlighting is called within the specularlighting method, the diffuseresult parameter will always be 0. |
| "specularresult" | output color | The results of the specular portion of the integration are written to this optional output parameter. When directlighting is called within the diffuselighting method, the specularresult parameter will always be 0. |
| "materialsamples" | __radiancesample [] | If the optional samples array is provided and has non-zero length, we use these in place of the material's generateSamples result in multiple-importance sampling modes. If the array is provided and has zero length, we fill it with the samples produced by the material's generateSamples. When used with indirectspecular it is suggested that the indirectspecular term be called prior to directlighting and that the samples generated by the indirectspecular integrator be passed to directlighting. |
| "heuristic" | string | Alters the variance reduction heuristic used to balance between samples the light generated and samples the material generated when MIS is in effect. |
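
For example, a lighting method that needs the two portions separately (say, for output to AOVs) can pass the optional output parameters. A small sketch, reusing the `m_material` and `m_lights` members from the *basic* shader above:

```
// Sketch: retrieve the diffuse and specular portions of directlighting
// separately via the optional output parameters.
color diffDirect = 0, specDirect = 0;
Ci += directlighting(m_material, m_lights,
                     "diffuseresult", diffDirect,
                     "specularresult", specDirect);
```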

```
color indirectspecular(shader material, ...)
```

| Name | Type | Description |
| --- | --- | --- |
| "materialsamples" | __radiancesample [] | If the optional array is provided and has non-zero length, we use these in place of the material's generateSamples result. If the array is provided with zero length, we fill it with the samples produced by the material's generateSamples. When used with directlighting for simple specular distributions we suggest that the indirectspecular term be called prior to directlighting and that the samples generated by the indirectspecular integrator be passed to directlighting. |
| "integrationdomain" | string | This parameter is passed to the material's generateSamples method. Valid values are "hemisphere" and "sphere". |
| "samplebase" | float | Specifies the jittering of the hemisphere ray origins. The default value is 0 (no jittering). A value of 1 corresponds to jittering over the size of a micropolygon. |
| "bias" | float | Specifies a float value to offset hemisphere sample origins. If not provided, the trace bias attribute is used. |
| "maxdist" | float | Specifies the maximum distance to consider when performing intersection calculations. The default value is 1e30. |
| "subset" | string | A trace set to consider for ray intersections. Only objects that are members of the named groups will be visible to the traced rays. Sets are defined with Attribute "grouping" "membership". |
| "othreshold" | color | An opacity threshold that determines termination criteria for automatic continuation of rays through semi-transparent surfaces. Using color(0) effectively disables automatic continuation for the given call. The implementation sets the default opacity threshold, typically color(1 - epsilon). |
| "ohitthreshold" | color | An opacity threshold that determines when a ray is considered to have "hit" anything or not, for the purposes of choosing which of the "hit" or "miss" code blocks to execute. When it is desirable to use shading (or texture maps) to cause parts of objects to be "invisible" to rays, the shaders on those objects should set Ci and Oi to zero for those parts. Rays that probe those objects will ignore them when the returned Ci and Oi are less than the "ohitthreshold" value. Using color(0) for this parameter causes shading at intersection points to be ignored for hit-testing purposes. The implementation sets the default threshold, typically color(epsilon). |
| "type" | string | Explicitly specifies the ray type for all rays spawned by this call. Valid values are "specular", "diffuse", and "transmission"; the default type is "specular". |
| "label" | string | Specifies the label associated with all rays spawned by this call. (Ignored for point-based calculations.) |
| "hitsides" | string | Specifies which side(s) of one-sided surfaces can be hit by the rays. The possible values are "front", "back", and "both". The default is "both". |

#### The Standard Radiance Sample Construct

The radiance sample is the basic unit of information exchange: it is both generated and evaluated by lights and materials. These generate/evaluate method pairs are central to the implementation of multiple importance sampling.

```
struct __radiancesample
{
    varying color  radiance = 0;         // (Cl, volume:Ci)
    varying vector direction = 0;        // (normalize(L), ray:direction)
    varying float  distance = 0;         // (length(L), ray:length)
    varying float  lightPdf = 0;         // light's PDF
    varying color  lightVisibility = 0;  // light-to-surface shadowing term
    varying float  materialPdf = 0;      // material PDF
    varying color  materialResponse = 0; // material's BRDF response
    varying color  accumulatedMaterialResponse = 0; // multilobe BRDF response
};
```

#### The Standard Material Integration Interface

Materials passed to the new integrators must implement the standard material interface methods. A material can be thought of as a collection of BRDF components (lobes). In responding to the integrator requests, a material must deliver sampling directions that are distributed across its components according to the distribution type and its components' types.

```
public void generateSamples(string distribution; output __radiancesample samples[]);
```

The material creates samples for the requested distribution, filling in the `__radiancesample` fields: `direction`, `distance` (usually a large value), `materialResponse` and `materialPdf`. The `distribution` parameter describes the context for the generateSamples request and may be helpful in determining which (and how many) samples to generate. Valid values for distribution are currently: "diffuse", "specular" and "indirectspecular".

```
public void evaluateSamples(string distribution; output __radiancesample samples[]);
```

The material evaluates its response to each entry in the `samples` array. Typically the `__radiancesample` fields `direction` and `distance` are inspected to determine the value for `materialResponse` and `materialPdf`. The `distribution` parameter allows us to treat diffuse and specular separately.

#### The Standard Light Integration Interface

Lights passed to the `directlighting` integrator must implement the
standard light integration interface. Lights generate samples independent of
the material, but may be tuned to the differential area of the receiver.

```
public void generateSamples(string integrationdomain; output __radiancesample samples[]);
```

Integrators can request that a light generate samples over its area
relative to the integration domain ("sphere", "hemisphere") of a surface
point. Point lights should return a single sample but renderers may infer
point-light status from the absence of this method. Samples include a PDF
weight associated with the light distribution. The PDF must be expressed
in terms of integration domain at the receiving surface. For an
area light with uniform sampling, the light's PDF would be `1/area`.
To express the area PDF as the integrationdomain PDF we multiply by
`d^2/cos` where `d` is the distance from the light sample to the
receiver and `cos` is the cosine of the angle between the light normal
at the sample point and the vector from the sample point to P. The
resulting PDF is `d^2/(cos * area)`.
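
For example, a sample taken on a light of area 4 (a 2x2 rectangle) whose normal points straight at a receiver 10 units away has an area PDF of `1/4`; converted to the integration domain at the receiver it becomes `d^2/(cos * area) = 100/(1 * 4) = 25`, the reciprocal of the roughly `0.04` steradian solid angle the light subtends.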

We expect lights to produce a number of samples tuned to the receiver
point. This means that the sample counts are varying quantities. But
since the RenderMan Shading Language only supports arrays with a uniform
length, we require lights to assign 0 to the `lightPdf` field of samples
that should not be sent.

```
public void evaluateSamples(string integrationdomain; output __radiancesample samples[]);
```

Just as the material's `evaluateSamples` adds a `materialResponse` to the samples produced by the light, the light's `evaluateSamples` responds to the directions generated by the material. For directions that intersect the light *and* produce a BRDF response, the light should produce an unoccluded `radiance` and `lightPdf` value. For *pseudo area lights*, there will typically be an analytical ray intersection with some shape here.

```
public void shadowSamples(output __radiancesample samples[]);
```

After the `generateSamples` and `evaluateSamples` calls we have collected nearly enough information to produce the final integration result, the *exitant radiance*, or `Ci`. The missing piece is the expensive visibility function between light and surface. At this point the `directlighting` integrator invokes the `shadowSamples` method to obtain a value for `lightVisibility`. The expectation is that the shadowing computation is performed between the sample's light and surface positions. As an optional optimization, we can skip computing the true visibility value where the combination of light and material responses is near zero. The `areashadow()` function can perform this optimization internally, i.e. it avoids looking up shadows where the exitant radiance is near zero. If you wish to perform your own shadowing computations, note that shadowing should only be done where `(accumulatedMaterialResponse + materialResponse) * radiance` exceeds some threshold. The `areashadow()` function can still perform this optimization for you but will base its decision on the optional `weight` parameter.

#### New Standard Sampling Functions

Since sample generation can be quite tricky, we've implemented some common schemes. We expect this collection to grow over time.

```
uniform float evaluateSamplesAS(varying color weight;
                                varying float exp1, exp2;
                                varying vector Nn;
                                varying vector Tn;
                                varying vector Vn;   // Vn.Nn > 0
                                output __radiancesample[] samples);

uniform float generateSamplesAS(varying color weight;
                                uniform float numSamples;
                                varying float exp1, exp2;
                                varying vector Nn;
                                varying vector Tn;
                                varying vector Vn;   // Vn.Nn > 0
                                output __radiancesample[] samples);
```

These functions generate and evaluate the specular lobe of the *Ashikhmin/Shirley*
BRDF. The caller provides the orthonormal basis via `Nn` and `Tn`, and
the extent of the anisotropic cone is captured by `exp1` and `exp2`, which
are the exponents of the cosine lobe, denoted `nu` and `nv` in the
original paper. In practice, these terms act as *shininess* rather than
roughness; that is, larger exponents result in tighter highlights.

The Fresnel term is handled by the user, either through the
weight parameter or by modifying each sample after generation. The
`materialResponse` member on samples includes the `Nn.Ln` term
necessary for proper integration over the hemisphere, as well as the
`1/max(Nn.Vn,Nn.Ln)` term that isn't part of the importance sampling.

Users may replace the `materialResponse` member of samples with their
own BRDF if they simply want the functions to generate directions
distributed according to an anisotropic cosine lobe. It is important
to include the extra `Nn.Ln` in the `materialPdf` term for proper
integration over the hemisphere when doing this.

```
uniform float evaluateSamplesEnv(uniform string envMapName,
                                 varying point P,
                                 varying normal N,
                                 output __radiancesample[] samples, ...);

uniform float generateSamplesEnv(uniform string envMapName,
                                 varying point P,
                                 varying normal N,
                                 uniform float nSamples,
                                 output __radiancesample[] samples, ...);
```

These functions generate and evaluate samples drawn from an environment map. Their behavior is controlled by the following optional parameters, most notably the `distribution` parameter.

| Name | Type | Description |
| --- | --- | --- |
| "integrationdomain" | string | Requests that samples be treated as from within the provided domain. Currently we expect either "sphere" or "hemisphere" domains. |
| "distribution" | string | Produces samples following the requested distribution within the integrationdomain relative to the surface position. Available distributions include "warped", "cosine", and "uniform", where "warped" produces samples distributed according to the relative intensities present in the environment map. The default distribution is "warped". |
| "environmentspace" | string or uniform matrix | Controls the coordinate-system transformation between the shading space and that in which the environment map is defined. The default environmentspace is "current". |
| "scale" | color | Multiplies environment map values by the provided scale. The default scale is (1,1,1). |
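
As a hedged sketch, an environment light's `generateSamples` might simply defer to the standard function, assuming `m_envMap` and `m_nsamps` are parameters of the light and that the function fills in `radiance` from the map as well as directions and PDF values:

```
// Sketch: an environment light's generateSamples built on the standard
// environment sampling function; m_envMap and m_nsamps are assumed
// parameters of this hypothetical light shader.
public void generateSamples(string integrationdomain;
                            output __radiancesample samples[])
{
    generateSamplesEnv(m_envMap, Ps, Ns, m_nsamps, samples,
                       "integrationdomain", integrationdomain,
                       "distribution", "warped");
}
```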

### New Standard Shaders

We provide a collection of plausible materials that provide production
quality results, but also serve as excellent examples. They rely upon
the new standard sampling functions for sample generation. Example RIB files
that use these materials can be found in `$RMANTREE/lib/examples/plausible`.
The shaders themselves can be found in `$RMANTREE/lib/rsl/shaders`.

#### Standard Material Shaders

- plausibleConductor
- plausibleDielectric
- plausibleGlass
- plausibleMatte

#### Standard Light Shaders

- plausibleArealight
- plausibleEnvlight
- plausibleSunlight

#### Utility Functions

These utility functions are useful when modeling a BSDF as a composition of
individual components, i.e., with multiple specular *lobes*.

```
void accumulateMaterialResponse(output __radiancesample samples[],
                                [varying color layerOpacity]);
```

Each per-component evaluation call, such as `evaluateSamplesAS`, should be followed by `accumulateMaterialResponse`. The optional *layerOpacity* parameter allows you to specify a layer compositing weight, which is used to influence the MIS computations. When no layering is required, the `layerOpacity` value needn't be provided; this is equivalent to setting the value to `color(1)`.

```
void normalizeMaterialResponse(output __radiancesample samples[],
                               uniform float nsamplesPerComponent[],
                               [varying color layerOpacities[]]);
```

Similarly, when generating samples each component should contribute samples
to the result. Now, while there may be multiple calls to, e.g.,
`generateSamplesAS`, there must be a single call to
`normalizeMaterialResponse`. We assume that each component may contribute
a unique number of samples; this is represented in the
*nsamplesPerComponent* array. The length of this array is equal to the
number of components that contribute to the samples. Finally, as above,
*layerOpacities* is an optional parameter which, if not provided, is
equivalent to an array of ones with a length equal to the length of
`nsamplesPerComponent`.

The `normalizeMaterialResponse` function also communicates the number of samples per
sample set to the renderer, so that it can apply heuristic weights properly.

### RSL Translation of `directlighting()`

The built-in `directlighting()` integrator has performance advantages for a
number of reasons; in part, its special treatment and allocation of the
`__radiancesample` struct lets it perform better than the equivalent
RSL code. However, in order to be clear about the interaction between the
built-in shadeops and the math that `directlighting()` implements, we have
provided an RSL translation of the integrator as part of the distribution.
The `$RMANTREE/lib/examples/plausible/directlighting` directory contains a
co-shader implementation (`directlightingIntegrator`) that elucidates the
inner workings of the plausible system. The example
`plausibleExampleRSLIntegrator2LobeDielectric.sl` shader uses this co-shader
to perform multiple importance sampling of a two-lobe dielectric type material.

### Additional References

- `${RMANTREE}/lib/rsl/shaders` contains several example shaders.
- `${RMANTREE}/lib/examples/plausible` contains example RIBs for the sample shaders.
- `${RMANTREE}/lib/rsl/include/stdrsl` contains standard code you can use in your own shaders.
- Rob Cook and Ken Torrance, "A reflectance model for computer graphics", Computer Graphics (Proc. SIGGRAPH 81), vol. 15, num. 3, pp. 307-316.
- James Kajiya, "The rendering equation", Computer Graphics (Proc. SIGGRAPH 86), vol. 20, num. 4, pp. 143-150.
- Robert Lewis, "Making shaders more physically plausible", Fourth Eurographics Workshop on Rendering, Jun. 1993, pp. 47-62.
- Eric Veach and Leonidas Guibas, "Optimally combining sampling techniques for Monte Carlo rendering", Computer Graphics (Proc. SIGGRAPH 95), pp. 419-428.
- M. Ashikhmin, P. Shirley, "An Anisotropic Phong BRDF Model", Journal of Graphics Tools, 5(2), 2000, pp. 25–32.
- Matt Pharr and Greg Humphreys, *Physically Based Rendering: From Theory to Implementation*, 2nd edition, Elsevier, 2010.
- Mark Colbert, Simon Premoze, Guillaume Francois, *Importance Sampling for Production Rendering*, SIGGRAPH 2010.
- http://en.wikipedia.org/wiki/Probability_density_function