Path Tracing in PRMan

May 2013


In recent years, the off-line rendering world has seen a shift from multipass rasterization techniques towards single-pass ray tracing, particularly for global illumination effects. PRMan's support for this has traditionally focused on distribution ray tracing, and the physically plausible framework is built with distribution ray tracing in mind.

An alternative approach, path tracing, has appeal for interactive preview. Rather than computing expensive but accurate estimates of the radiance from surface interactions, path tracing combines a large number of cheap but individually inaccurate samples to produce a good estimate. It can produce very fast but noisy initial results and then refine them over time. This rapid turnaround is very desirable for re-rendering.

To provide users with the flexibility to take advantage of these benefits, RPS 18 introduces a new path tracing mode for PRMan's raytrace hider.

Path Tracing with the Raytrace Hider

To this end, the raytrace hider now accepts a new parameter - "string integrationmode" ["distribution" | "path"] - which simply tells the raytrace hider whether we wish to use distribution ray tracing or path tracing.

Hider "raytrace" "string integrationmode" ["path"] "int minsamples" [1] "int maxsamples" [256] "int incremental" [1]

When integrationmode is set to "path" it does three things:

  • Features that only make sense when using distribution-style integration are automatically disabled (specifically, the radiosity cache is disabled).
  • The renderer provides an option that can be queried by shaders to determine if the render is in "path" mode.
  • The hider will iterate over the image, tracing one ray per pixel at each iteration until the maximum number of paths per pixel has been traced.

Each iteration will accumulate the result and update the display to show the results as they converge.
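As a sketch of the second point above, a shader can query the hider's mode at runtime via the "RiHider:integrationmode" option (discussed further under Direct Lighting below):

uniform string mode = "";
option("RiHider:integrationmode", mode);
if (mode == "path") {
    /* restrict ourselves to a single sample per integration domain */
}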

Off-line Rendering

When rendering to images on disk, iterating over an image repeatedly will not work properly with many display drivers. Instead, the hider will complete all paths in a bucket before it is written to the display. The maximum number of paths is controlled by the "int maxsamples" parameter.


The "maxsamples" parameter is optional, defaulting to 0. When 0, maxsamples is set to the number of sub-samples as defined by PixelSamples. (Note that PixelSamples does not apply in PRMan's RIS mode.)
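For example, the following illustrative settings leave maxsamples at its default of 0, so the hider falls back to the PixelSamples sub-sample count - here 4 x 4 = 16 paths per pixel:

PixelSamples 4 4
Hider "raytrace" "string integrationmode" ["path"] "int maxsamples" [0]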

On-line Rendering and Re-Rendering

Interactive preview of a rendering (say to 'it' or 'rv') can be made incremental by adding the "int incremental" [1] flag to the hider:

Hider "raytrace" "string integrationmode" ["path"] "int minsamples" [1] "int maxsamples" [256] "int incremental" [1]

Buckets will be sent to the hider multiple times (in this case, once for each of 256 iterations). PRMan handles accumulation of the incremental results, so no changes need to be made to interactive viewers other than the ability to accept the same bucket more than once for a given image.

Since re-rendering is already a progressive (and thus incremental) render, the incremental flag is not needed. Simply issue the Hider line before EditWorldBegin and interactive, progressive, path-traced re-rendering will happen. For more information, please consult the Re-rendering application note.

The following RIB snippet is a simple example of path-traced re-rendering:

Hider "raytrace" "string integrationmode" ["path"]
                 "int maxsamples" [256]
                 "int incremental" [1]
Option "rerender" "int[2] lodrange" [0 2]
Option "bucket" "string order" ["spiral"]
EditWorldBegin "bakedb"
               "string rerenderer" ["raytrace"]
# ...

Reducing Branching

While shaders written to our physically plausible framework currently use distribution ray tracing, much of the model of light interaction embodied by that framework - BRDFs, area light sources, PDFs, etc. - is also applicable to path tracing. We therefore leverage this model so that existing physically plausible shaders require as few changes as possible.

The key difference here is that the ray tree branching factor at each material interaction must be reduced from the current multiplicity down to a single path each for the direct and indirect light paths.

The following describes how the path tracing mode affects the interaction between shaders and our integrator shadeops. If your shaders perform their own integration in RSL then you will need to update them to limit the number of samples accordingly.

Our goal is to make it as easy as possible to switch back and forth between distribution ray tracing and path tracing modes while having both converge to the same images.

Direct Lighting

When path tracing, each area light source shader should restrict itself to generating a single sample from its generateSamples(). This will be done automatically if the shader uses stdrsl_SampleMgr to determine how many light samples to compute. Otherwise, the shader should check option("RiHider:integrationmode", ...) or stdrsl_ShadingContext::m_PathTracingMode to determine if the renderer is path tracing and then limit itself.
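As an illustrative sketch of the manual approach (the member name m_userSampleCount is hypothetical, and the surrounding generateSamples() structure is elided):

uniform float nsamples = m_userSampleCount;  /* hypothetical member */
uniform string mode = "";
option("RiHider:integrationmode", mode);
if (mode == "path")
    nsamples = 1;  /* a single light sample per path vertex */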

If the material passes any samples to the directlighting() integrator via the materialsamples, then it too should limit itself to a single sample via one of the above methods.

There may also be multiple light sources. In this case, we can control the total number of light samples with Option "shading" "directlightingsamples". The directlighting() integrator will choose a light source at random with a probability proportional to the total power returned by each light's generatePhoton() method. This light will then be the only one used to generate and evaluate direct lighting samples [1].


As an aside, there's a range of options here. The above strategy still produces two transmission rays if applied naively when doing MIS - one each for the light sample and the material sample. Alternatively, we could use Veach's one-sample estimator approach and flip a coin to decide between a light sample or a material sample.

Unfortunately, choosing a light purely according to a static total power is likely to result in a lot of noise since it won't take into account the distance to the light or whether it's under the hemisphere. The benefit is that it's fast and cheap which may be okay for preview.

We could also generate one sample from each light and one sample from the material, evaluate the light samples against the material and vice versa and then choose one with proportion to the product of the radiance and material response. This would produce only one transmission ray, but require more shader invocations.

Alternatively, we could simply take a single sample from each light, do the MIS with the material as usual and be done with it. This will be the default if we make no other changes to directlighting(). However, it could mean a much larger number of transmission rays at each path vertex.

Because the choice of samples takes into account the relative potential contribution from each light at the shading point, using geometric area lights will produce less noise and faster convergence than traditional light sources. For more information about direct illumination with the new geometric area light system, please see the Geometric Area Lights application note.

Indirect Lighting

Just like sampling direct lighting, for indirect lighting each material shader should restrict itself to a single sample of the indirect radiance. Shaders calling the indirectdiffuse(), indirectspecular(), and ray-traced subsurface() functions should restrict themselves to passing only a single sample to these functions.

For indirectspecular() on materials with multiple potential specular paths (e.g. reflection and refraction, or birefringence) only one specular path should be chosen and passed to indirectspecular(). If a shader uses stdrsl_SampleMgr::computeSpecularSamples() to determine the number of samples to use, then this will be automatic. To simplify this, the sample array passed to indirectspecular() may contain additional samples provided that only one sample in the array has a non-zero PDF.
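A sketch of choosing a single specular path by hand, assuming hypothetical reflection and refraction weights kr and kt:

float pReflect = kr / (kr + kt);
if (random() < pReflect) {
    /* generate the one sample from the reflection lobe,
       dividing its weight by pReflect to keep the estimator unbiased */
} else {
    /* generate the one sample from the refraction lobe,
       dividing its weight by (1 - pReflect) */
}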

If your shader's lighting() method sums results from multiple integrators then it should be changed to choose one at random. Usually, the probability for each integrator is proportional to its contribution to the material's reflectance. Note that it is not strictly necessary to do this for the path trace mode of the renderer to work. However, failing to do so can result in what is effectively distribution ray tracing, thereby losing the interactivity benefits of path tracing.
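A sketch of this selection inside lighting(), assuming hypothetical reflectance estimates diffAlbedo and specAlbedo (m_PathTracingMode is the stdrsl_ShadingContext member mentioned above):

if (m_shadingCtx->m_PathTracingMode != 0) {
    float pDiff = diffAlbedo / (diffAlbedo + specAlbedo);
    if (random() < pDiff) {
        /* run only the diffuse integrator; divide its result by pDiff */
    } else {
        /* run only the specular integrator; divide by (1 - pDiff) */
    }
} else {
    /* distribution mode: sum the results of all integrators as before */
}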

Warning: while it is possible to shoot more than one sample of indirect radiance even in path tracing mode, doing so goes against the spirit of path tracing because it can cause an explosion in the number of rays at deeper ray depths. For this reason it should usually be avoided.

Stratified Sampling

Because of the limited ray budget at each shading point when path tracing, it's important that each sample contribute as much information as possible to the estimate of the lighting in the scene. Variance reduction techniques are key and one particularly useful method is stratified sampling.

In particular, for nearby shading points that have been arrived at through similar ray paths, we'd like for their next direct or indirect rays to go in different directions. In practical terms this means that the random numbers used to choose the direction of the next rays should prevent clumping. The randomstrat() shadeop is designed to provide well distributed random numbers.

Calls to randomstrat() that request a single sample, e.g.:

point xi[1];
randomstrat(0, 1, xi);
float xi1 = xi[0].x;
float xi2 = xi[0].y;

will try to ensure a minimum 2D distance between xi for any two shading points that were reached along similar paths. Requests for multiple samples will return a different stratified set for each shading point. Therefore, shaders that rely on randomstrat() to generate a set of canonical samples and then warp them should work equally well for both path tracing and distribution ray tracing.

Likewise, if your shader uses the generateSamplesAS() or generateSamplesEnv() shadeops and requests a single sample, the results should be well stratified between nearby shading points arriving from similar paths.

(If more than one sample is requested, they will be stratified with regard to each other, but not with respect to their similar paths. This is another reason why we recommend using only a single sample for indirectdiffuse(), indirectspecular(), and subsurface().)

Performance Notes

Path-style integration typically sends buckets to the display much more frequently than distribution integration does, making efficient display more important. The best performance from PRMan display drivers is had by sending pixels to 'it' over named pipes instead of sockets. For interactive on-line rendering:

prman -dspyserver it scene.rib

The Display line in scene.rib must also use the 'it' driver, and the 'it' executable must be on your path.