Volume Rendering

September, 2009 (Revised March, 2013)

Introduction

RenderMan Pro Server 15 introduces first-class volumetric rendering capabilities. In prior releases, volumetric effects were typically achieved in RenderMan using techniques such as textured camera-aligned sprites or ray marching (for an example, see the application note Atmosphere and Interior Shaders). These approaches, while useful, have several drawbacks, including slow execution speed and lack of integration with other parts of the renderer. The new volumetric rendering and shading language support addresses these limitations.


Volume Primitives

RiBlobby Support

To begin with, we need some way of defining a volumetric region of space. In order to do so, we can use the RiBlobby primitive. A new instruction has been added to the list of numeric opcodes supported by RiBlobby. The new opcode, which has the value of 8, takes no arguments. Its appearance anywhere in the list of opcodes indicates that the interior of the RiBlobby - defined to be anywhere in space where the field function of the RiBlobby is greater than 0 - is to be rendered as a volumetric region. Compare this to the usual treatment for a RiBlobby, which defines the surface of the RiBlobby to exist where the field function is exactly 0.

Here is a simple example that renders a volumetric region defined by an additive combination of six RiBlobby spheroid primitives.

##RenderMan RIB
version 3.04
FrameBegin 0
Display "multisphere_volume.tif" "tiff" "rgba" 
Exposure 1 2.2
Format 256 256 1
Projection "orthographic" 
WorldBegin 
AttributeBegin 
Surface "constant" 
Rotate 45 1 1 1
Blobby 6 [8 1005 0 1005 4 1005 8 1005 12 1005 16 1005 20 0 6 0 1 2 3
4 5] [0.55 0.4 0 0 0.55 0 0.4 0 0.55 0 0 0.4 0.55 -0.4 0 0 0.55 0
-0.4 0 0.55 0 0 -0.4] [""] "vertex color Cs" [1 0 0 0 1 0 0 0 1 0 1 1
1 0 1 1 1 0]
AttributeEnd 
WorldEnd 
FrameEnd 

When rendered with the constant shader, the resulting images show the difference between rendering the blobby as a surface and rendering it as a volume.

images/figures.volume_rendering/multisphere_surface.png (Blobby rendered as a surface)
images/figures.volume_rendering/multisphere_volume.png (Blobby rendered as a volume)

Volumes Using Implicit Field Plugins

RiBlobby provides several basic primitive opcodes (allowing the creation of spheres, ellipsoids, segments, and repelling ground planes), and also provides the ability to augment these opcodes through implicit field plugins which compute arbitrary field functions. By writing an implicit field plugin, we can arbitrarily define a volumetric region, as long as we can express the desired shape as an implicit equation: f(P) > 0. PRMan ships with several useful implicit field plugins. Two of these allow volumes to be shaped as a cube or a cone.

Another plugin allows the use of Maya Fluid Cache files. Here, the volumetric region is defined to be a cube. The plugin computes the value of density, Cs, and other variables inside this cube by interpolating data from the cache file, and provides access to these variables to the shader. To illustrate the use of this plugin, consider the following example, which uses a fluid cache file generated by Maya fluid effects.

FrameBegin 1
        Option "ribparse" "string varsubst" ["$"]
        Option "searchpath" "string procedural" ["${RMANTREE}/etc"]
        Format 512 512 1
        Shutter 0 1
        Display "flame.tif" "tiff" "rgba" 
        Projection "perspective" "fov" [54.4322]
        ScreenWindow -0.5625 0.5625 -0.75 0.75
        Translate 0 0 100
        ConcatTransform [ 0.999935 -0.0043814 -0.0105109 0  
                          0 0.923019 -0.384754 0  
                          -0.0113875 -0.384729 -0.922959 0  
                          0 -2.4846 10.6085 1 ]
        WorldBegin 
                Surface "fire" 
                Translate 0 5 0
                Scale 5 5 5
                MotionBegin [0 1]
                        Blobby 1 [8 1004 0 23 0 2 1] 
                                 [1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 250 10 10 10 0.1 0 0.0446]
                                 ["impl_mfc" "mfc" "flamer.mb_FlameShape.mcfp"] 
                                 "varying color Cs" [] "varying float density" [] 
                                 "varying float temp" []
                        Blobby 1 [8 1004 0 23 0 2 1]
                                 [1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 250 10 10 10 0.1 0 0.0446]
                                 ["impl_mfc" "mfc" "flamer.mb_FlameShape.mcfp"]
                                 "varying color Cs" [] "varying float density" []
                                 "varying float temp" []
                MotionEnd
        WorldEnd
FrameEnd

Here, the impl_mfc.so plugin is referenced as the single leaf primitive in a RiBlobby call, and the volume-enabling opcode 8 has been added to the instruction stream. Notice that deformation motion blur is specified here, as it is supported for volumetric rendering. The fire shader used is a simple translation of the Maya fluid effect shading settings:

float inputbias(float x; float bias) {
    // Maya-style bias curve: remaps x in [0,1] while keeping 0 and 1 fixed
    return pow(x, -log(0.5 + bias) / log(2));
}

surface
fire(varying float density = 1; varying float temp = 1) {
    // Hot regions glow with a fire-like incandescent color
    float biasedtemp = inputbias(temp, -0.2);
    color incandescence = (1.0 - smoothstep(0.143, 0.857, biasedtemp)) *
        color(20, 7.776, 3.702);
    // Extinction ramps up and then back down with density,
    // giving the flame a soft shell
    float biaseddensity = inputbias(density, 0.315);
    if (biaseddensity < 0.15) {
        Oi = 0.9 * smoothstep(0.136, 0.15, biaseddensity);
    } else {
        Oi = 0.9 * (1 - smoothstep(0.15, 0.857, biaseddensity));
    }
    Ci = incandescence * Oi;
}
        

Resulting in this rendered image:

images/figures.volume_rendering/flame.jpg

Yet another plugin provided with PRMan allows the use of brick maps to define a volume. We illustrate this plugin with a RIB file that renders a simple torus-shaped, cloud-like volume from a blobby volume brick map:

Option "ribparse" "string varsubst" ["$"]
Option "searchpath" "string procedural" ["${RMANTREE}/etc"]
FrameBegin 0
  Format 300 300 1
  ShadingInterpolation "smooth"
  PixelSamples 4 4
  Display "blobbyvolumebrickmap_b.tif" "it" "rgba"
  Projection "perspective" "fov" 15
  Translate 0 0 18
  WorldBegin
    Surface "constvol"
    Opacity 2 2 2   # extinction coefficient multipliers
    Rotate 25 1 0 0
    Rotate -15 0 1 0
    Scale 2 2 2
    # Blobby volume brick map: semitransparent torus
    Blobby 1 [8 1004  0 0 0 1 1] [0]
      ["impl_brickmap.so" "blobbyvolumebrickmap.bkm"]
      "varying color Cs" []
      "constant float levelset" [0]
  WorldEnd
FrameEnd

The shader is very simple:

surface constvol ()
{
  Oi = VolumeField * Os; // for RiBlobby-generated volumes (explained below)
  Ci = Cs * Oi;
}

The brick map must be generated with the new brickmake command-line option -addpresence 1. This option adds a new channel to the brick map that describes how large a fraction of each voxel contains data (and, conversely, how much is empty space). This information makes it possible for texture3d() to correctly mix data values with empty values for filter regions that contain a mix of valid volume data and empty space.
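
For example, a brick map with a presence channel might be generated like the following; the input and output file names here are placeholders:

brickmake -addpresence 1 blobbyvolume.ptc blobbyvolumebrickmap.bkm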

Caution: For the presence calculation to be correct, it is important that the point cloud (that the brick map is generated from) has point radius values that correspond (roughly, at least) to the spacing between the points. Also, large and small points should not be mixed in the same region of space. Finally, because the default threshold of a RiBlobby is not zero, we require that threshold be reset to zero using the primitive variable "levelset" attached to the RiBlobby.

Here's the rendered image corresponding to the RIB file:

images/figures.volume_rendering/blobbyvolumebrickmap.jpg

(This image at resolution 300x300 takes around 7 seconds to render on a modern multi-core PC. It uses 9 million lookups in the brick map.) The RIB file and shader used to generate the point cloud and brick map can be found in the appendix at the end of this note.


RiVolume

PRMan 15.2 introduces a new way of adding volumes to your scene: the RiVolume geometric primitive.

RiVolume "type" [ bounds ] [ nx ny nz ] ...

The RiVolume primitive specifies a shaped volumetric region of space, centered within user-specified bounds in object space. The shape of the region is specified by the "type" parameter, which can take the string values "box", "ellipsoid", "cone", or "cylinder". In PRMan 16.0, it can also take the special value blobbydso, described in more detail below.

The nx, ny, and nz values specify the number of vertex and varying storage class primitive variables, as well as the number of coarse voxels used in the specification of dPdtime based motion blur (described below). The number of vertex and varying storage class values expected is nx * ny * nz, specified in an order where X varies fastest (from low to high) and Z varies slowest. Vertex variables are interpolated using a tricubic basis; varying variables are interpolated using a trilinear basis. For uniform or constant storage a single value is expected. Note that the tricubic basis used for vertex variables is a Catmull-Rom spline; because this spline has negative lobes, interpolated vertex values may fall outside the range of the supplied data.
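
As a minimal sketch of this data layout, here is a "box" volume with a 2x2x2 lattice of varying density values (the values themselves are arbitrary), supplied with X varying fastest and Z slowest:

Volume "box" [-1 1 -1 1 -1 1] [2 2 2]
    "varying float density" [0.0 0.1   # (x0 y0 z0) (x1 y0 z0)
                             0.2 0.3   # (x0 y1 z0) (x1 y1 z0)
                             0.4 0.5   # (x0 y0 z1) (x1 y0 z1)
                             0.6 0.7]  # (x0 y1 z1) (x1 y1 z1)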

The RiVolume primitive is a simpler primitive than RiBlobby. In particular, unlike RiBlobby, it does not allow for building of complex shapes using arithmetic operators. However, because of this simplicity, it can also be more efficient, especially when compared to a RiBlobby with a single leaf node. The RiVolume can generally compute field functions more efficiently, which leads to faster determination of interesting regions of space where the volume has effect.

There are other tradeoffs when comparing the two approaches. In versions of the renderer prior to 17.0, RiVolume creates voxels that are aligned in object space, whereas RiBlobby creates voxels that are aligned to the camera. This means that the RiVolume response to depthrelativeshadingrate may not be as high as that of RiBlobby, and in special circumstances (no motion blur, no depth of field) it may not hit test as efficiently as RiBlobby. Starting with PRMan 17.0 both primitives create voxels aligned in camera space by default; RiVolume can revert to object space voxels by turning off raster oriented dicing (i.e. setting Attribute "dice" "rasterorient" [0]).

Unlike RiBlobby-generated volumes, RiVolume is also able to take advantage of the radiosity cache. In particular, this means that rays fired by both the transmission() and indirectdiffuse() shadeops are computed at the frequency of the normal REYES shading rate, cached, and reused. As a result, computing transmission() results from a RiVolume is expected to be very fast - depending on the caching behavior, perhaps as fast as a shadow map lookup or better. (This assumes that your opacity does not have any ray-direction dependent computation.) Likewise, computing indirectdiffuse() in a RiVolume is also expected to be more efficient than using a RiBlobby, and may even be preferred to point based techniques.

Motion blur with RiVolume

RiVolume supports several motion blur specifications. Transformation motion blur is supported. A basic form of deformation blur can also be specified by shifting the bounds of the RiVolume.

 ##RenderMan RIB
 version 3.04
 FrameBegin 0
 Display "lattice_basic_deform.tif" "tiff" "rgba" 
 PixelSamples 9 9
 Format 256 256 1
 Shutter 0 1
 Projection "perspective" 
 WorldBegin 
 AttributeBegin 
 Surface "constant" 
 Translate 0 0 3
 Rotate 45 1 1 1
 MotionBegin [0 1]
   Volume "ellipsoid" [-2 0 -2 0 -2 0] [2 2 2]
   Volume "ellipsoid" [0 2 0 2 0 2] [2 2 2]
 MotionEnd
 AttributeEnd 
 WorldEnd 
 FrameEnd 

Resulting in the following rendered image:

images/figures.volume_rendering/lattice_basic_deform.jpg

A more interesting special effect is the ability to specify the deformation motion blur of individual coarse voxels. This can be enabled by specifying the special primitive variable "vector dPdtime" directly on a single RiVolume, which may be optionally enclosed within a RiMotionBegin/RiMotionEnd block. In the case that the RiVolume is not enclosed in a motion block, another special primitive variable "float time" must also be directly attached to the volume.

These primitive variables can be arrays. In the case where the volume is not within a RiMotionBegin/End block, the array dimension of time specifies the number of motion samples of the RiVolume, with the values of time being the time of the motion samples (in other words, identical to the values that would be used for RiMotionBegin). The array dimension of dPdtime must be exactly one less than the number of motion samples, and the values specify the motion vectors for the voxels for all time samples except the last. The motion vectors are always interpreted as the full distance that the voxel will travel between two time samples.

vector dPdtime can be of uniform, constant, or varying storage. Vertex storage is accepted, but the motion vector will not be interpolated using a tricubic basis; it will only be trilinearly interpolated.

To illustrate, the following example shows a sphere with only a single coarse voxel. The sphere is blurred with 3 motion samples, resulting in two motion segments (between 0 and 0.5, and between 0.5 and 1). A pair of motion vectors has been specified for each corner of the coarse voxel, indicating first a movement in X of one unit, followed by a movement in Y of two units. The length of the second motion vector is twice the length of the first, leading to an accelerating blur for the second motion segment.

 ##RenderMan RIB
 version 3.04
 FrameBegin 0
 Display "lattice_dPdtime.tif" "tiff" "rgba" 
 PixelSamples 9 9
 Format 256 256 1
 Shutter 0 1
 Projection "perspective" 
 WorldBegin 
 AttributeBegin 
 Surface "constant" 
 Translate 0 0 3
 Attribute "shade" "string frequency" "frame"
 Volume "ellipsoid" [-1 1 -1 1 -1 1] [2 2 2] 
    "constant float[3] time" [0 0.5 1]
    "vertex vector[2] dPdtime" [1 0 0 0 2 0 1 0 0 0 2 0 1 0 0 0 2 0 
        1 0 0 0 2 0 1 0 0 0 2 0 1 0 0 0 2 0 1 0 0 0 2 0 1 0 0 0 2 0]
 AttributeEnd 
 WorldEnd 
 FrameEnd 

Resulting in the following rendered image:

images/figures.volume_rendering/lattice_dPdtime.png

Note: support for this feature in PRMan 15.2 was enabled by specification of a scalar valued "vector dPdtime" directly on all copies of the geometry within RiMotionBegin/End blocks, without requiring "float time". For backwards compatibility, this is still supported in PRMan 16.0, but is deprecated, and will only be supported for RiMotionBegin/End blocks with two samples.

Using Implicit Field Plugins with RiVolume

PRMan 16.0 adds support for implicit field plugins used in conjunction with RiVolume. This is enabled by using a URI for the "type" parameter, where the URI scheme is the word "blobbydso", and the remainder of the URI specifies the path to the plugin. (The plugin path will be resolved against Option "searchpath" "procedural".) Float parameters to the plugin can be passed by using a primitive variable with the name "blobbydso:floatargs", which must be a "constant float" array. Similarly, string parameters to the plugin can be passed by using a primitive variable with the name "blobbydso:stringargs", which must be a "constant string" array. The threshold used to determine the limit surface of the plugin will be the default threshold used by RiBlobby; this can be overridden by specifying a value for "constant float blobbydso:threshold".

Primitive variables may be attached to the RiVolume as usual. In addition, for any vertex or varying storage variables, the plugin will be given the opportunity to compute their values.

Also, when using Implicit Field plugins, vector dPdtime attached to the geometry will be ignored in favor of deformation motion blur results computed by the plugin.

In the following example, we specify a RiVolume that uses the impl_cone.so implicit field plugin that ships with PRMan. This plugin accepts either 0 or 7 float arguments; here, we pass 7 arguments specifying that the cone apex is the point (0, 0.75, 1), the center of the base is (0, -0.25, 1), and the radius is 1. (Note that this is a slightly contrived example: RiVolume "cone" would generally be the more efficient solution here.)

 ##RenderMan RIB
 version 3.04
 FrameBegin 0
 Option "rib" "string varsubst" ["$"]
 Display "lattice_dsocone.tif" "tiff" "rgba" 
 Format 256 256 1
 Shutter 0 1
 Projection "perspective" 
 WorldBegin 
 AttributeBegin 
 Surface "constant" 
 Volume "blobbydso:${RMANTREE}/etc/impl_cone" [-1 1 -0.25 0.75 0 2] [0 0 0] 
     "constant float[7] blobbydso:floatargs" [0 0.75 1 0 -0.25 1 1]
 AttributeEnd 
 WorldEnd 
 FrameEnd 

Resulting in the following rendered image:

images/figures.volume_rendering/lattice_dsocone.jpg

Users of this functionality should note some tradeoffs when using RiVolume against a single-leaf RiBlobby using the same Implicit Field plugin.

  • Unlike RiBlobby, RiVolume makes no attempt to extract the isosurface; this is particularly significant when it comes to ray tracing, and may lead to significant bias in ray traced results if the shading behavior of the volume varies considerably at the isosurface.

Authors of Implicit Field plugins intending to use them with RiVolume should also note the following:

  • The bounding box of the plugin (the bbox field usually set in the constructor) and the bounding volume specified with RiVolume will be intersected to determine the actual rendered result.
  • The Range() method of the plugin is used during RiVolume split operations to efficiently discard regions of empty space. Unlike RiBlobby, the Range() method will be handed points derived from an object space bounding box.
  • As is the case for volumetric RiBlobby, the Gradient call is unused.
  • Deformation motion blur can be computed by the plugin. The BoxMotion and MotionMultiple callbacks must be implemented in order for this to work.

Renderer Details

Shading Language Model

In the shading language, the model of execution for surfaces is that the shader is operating on a small surface element, generally with some amount of information local to the immediate neighborhood of that element. Given that information, a surface shader is primarily responsible for computing the radiance of the surface element that reaches the eye.

When the renderer is shading a volume, we are of course no longer dealing with a surface element; instead, the renderer is operating on a volume element. The primary difference between the two is that there is some thickness to the volume element. When rendering a volume element, it is no longer sufficient simply to compute the radiance: the renderer must take into account how the volume influences the light while the light is within the volume. Consider the volume element pictured below, which is intersected by a light ray with incoming intensity Li. The light traverses the volume element for some distance z, and leaves the volume element with outgoing intensity Lo.

images/figures.volume_rendering/diagram_singleelement.png

In typical volumes, some fraction of the light never leaves the volume element along the same direction in which it entered. This may occur due to the light being absorbed by the medium itself, or because the light was scattered in a different direction. Whatever the cause, we refer to this fraction as the extinguished light, or light that has been subject to extinction. In the surface case, the element is infinitesimally thin, so the opacity computed by the surface shader is used as a simple way of modeling the extinction of light through the surface: the incoming light intensity Li is simply multiplied by one minus the opacity to compute the outgoing light intensity Lo.

For volume elements this is not as straightforward, since the thickness of the volume element plays a role in just how much light extinction will take place. Therefore, it is no longer valid to simply multiply the incoming light intensity by an opacity to compute the outgoing light intensity; the thickness of the element must also be taken into account. Instead of computing an opacity, volume shaders are required to compute the extinction coefficient. This quantity is commonly denoted by the Greek letter tau, or τ, in volume rendering literature. Given an incoming light intensity Li, τ can be used to directly compute the outgoing light intensity Lo after taking into account the effect of light extinction by a volumetric element of length z, using the equation:

Lo = Li e^(-τz)

Intuitively, the extinction coefficient can be thought of as the optical density of the volume: the higher the extinction coefficient, the more opaque the volume and the less light that will be transmitted through the volume. It is important to note that, unlike opacity, the extinction coefficient is unbounded and is not required to be less than one. In fact, given the presence of the exponentiation function in the equation above, the only way to model an opaque volume is to have τ approach infinity.
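
To make this concrete: a volume element of length z = 0.5 with extinction coefficient τ = 2 transmits a fraction e^(-2 * 0.5) = e^(-1) ≈ 0.37 of the incoming light, so roughly 63% of it is extinguished. Doubling either the length or the extinction coefficient squares that fraction: e^(-2) ≈ 0.14.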

In shading language terms, for volume shading purposes, Oi is the desired extinction coefficient for the volume. Analogous to the way that surface shaders compute a color (Ci) that has been premultiplied by opacity (Oi), volume shaders are required to compute a Ci - in this case the color of the light scattered in the direction of the eye - that has been multiplied by the extinction coefficient - Oi. The relationship of Ci to Oi remains the same, however each represents something slightly different.

In the examples above, we were able to use the standard constant shader to shade a volume. Now we can understand why: Oi is reinterpreted for the volume shading case to be the extinction coefficient of the volume and is assigned the default value of Os (which is 1). The render hence depicts a volumetric region with an extinction coefficient of 1. We can easily write a more interesting volume shader by using the noise shadeop to add texture to the extinction coefficient.

surface noisevol(float frequency = 10; float amplitude = 2)
{
    point Ps = transform("shader", P);
    Oi = float noise(frequency * Ps) * amplitude;
    Ci = Cs;
    Ci = Ci * Oi;
}

Applied to the RiBlobby above, this generates a picture:

images/figures.volume_rendering/multisphere_noise.png

If you are familiar with how ray marching atmosphere shaders work, you may wonder why there is no mention of step size in these shaders. That's because in this model the shader is working on a single volume element, rather than having to be aware of a volume region. (Actually, the SIMD shader execution model means that more than one volume element is being shaded at once, but from the point of view of the shading language this is mostly irrelevant.) Rather than requiring the shader to perform the integration of light through volume elements, the renderer automatically takes care of that for you. Hence, if we have a volume region composed of three volume elements, in order to compute the correct color that reaches the eye, it suffices for the shader to compute Ci and Oi locally at each volume element V1, V2, and V3, without any knowledge of the others. The renderer takes care of the rest. This is illustrated below:

images/figures.volume_rendering/diagram_multipleelements.png

Crucially, this includes knowing the length of the volume element. The renderer decides what the size of the volume element is based on the shading rate, and it is this size that is used along with the extinction coefficient computed by the volume shader to compute the outgoing light intensity of each volume element.

Here is an example of a shader that creates a cloud-like appearance in a volume. Note that the opacity computation part of this shader has a strong resemblance to the shadowedclouds shader presented in Advanced RenderMan (pg. 418). Unlike that shader, however, it is much simpler: it does not need to perform any ray marching. First, we define a volume density function for a cloud-like appearance - we will reuse this function in the rest of the cloud shading examples.

#include "noises.h"

float volumedensity(float noisefreq) {
    extern point P;
    point Pworld = transform("world", P);
    float width = area(Pworld * noisefreq);
    float density = 1 + fBm(Pworld * noisefreq, width, 7, 2, 0.5);
    // Increase contrast
    density = pow(clamp(density, 0, 1), 7.28);
    // Modulate by volume field
    density = VolumeField * density;
    return density;
}

We then use this volume density function in conjunction with a constant volume element color.

#include "cloud.h"

surface cloud_simplelight(float noisefreq = 2.0) {
    Oi = volumedensity(noisefreq);
    Ci = Cs;
    Ci = Ci * Oi;
}

Resulting in this image:

images/figures.volume_rendering/cloud_simplelight.png

While there is a nominally fluffy appearance to this cloud, it is missing some important self-shadowing detail; we will examine this in detail in the following sections.

Note that the volume extinction, and hence opacity, of a volumetric object increases if the object is scaled up: scaling up by a factor of two doubles the path length z through each element, squaring the transmittance e^(-τz). This is an important difference from rendering surfaces, where the color and opacity are independent of scale. However, the volume extinction is (roughly) independent of the shading rate; the shading rate simply determines the accuracy of the rendering.

Volumetric Grids

The grids that are generated by the renderer for volumetrics have the following properties:

  • for REYES grids generated for the camera using RiBlobby: if the camera is orthographic, the grids are always rectangular lattices. If the camera is perspective, the grids are rectangular lattices after the perspective divide; hence, in camera (and world) space, each voxel is a frustum. Therefore, in the latter case, shaders cannot assume that the voxels have orthogonal sides, and anti-aliasing estimates of spot size should adjust accordingly. Note that filter regions always correctly adjust for the voxel size.
  • for REYES grids generated for the camera using RiVolume: grids are rectangular lattices oriented in camera space by default, or oriented in object space if rasteroriented dicing is disabled. Each voxel is a rectangular prism.
  • for ray traced volumetrics: the grids are triangular prisms.

It is important to note that, as far as the shading environment is concerned, a volumetric grid that intersects the volumetric envelope does not have voxel elements that are truncated by the envelope; the voxels are still regularly spaced as above. Conceptually, truncation by the volumetric envelope takes place after shading. As a result, your shader will also execute on lattice points that are outside the volume, and should not expect to compute only those results that are strictly inside the envelope. The renderer does restrict the number of shader executions outside the volume to a minimum: the renderer will never keep a grid that is entirely outside the volume, and shader execution is restricted to only a neighborhood of points near the envelope sufficient to allow for correct area shadeop execution.

Shading Language Details and Extensions

As shown in the previous examples, when the renderer is shading volume elements it actually uses the current Surface shader, not the Atmosphere, Interior, or Exterior shaders. This may be confusing at first, but given the previously described shading language model, where the renderer is interested in determining the local properties of a small volume element, the assumptions that lead to the restricted environments for Atmosphere, Interior, and Exterior shaders are much less appropriate.

Consequently, all predefined surface shader variables that apply for a Surface execution are also applicable to the shading of a volume element, with the following changes:

N

defined to be (0, 0, 0) inside a volume.

u, v, w, Du(), Dv(), Dw()

Since there is no reasonable parameterization of a volume, u, v, and w are each set to the x, y, and z components of P in object space. Because of this, it is recommended that shader writers avoid the use of Du() and Dv() as estimates of shading spot size. Nonetheless, the renderer provides a Dw() as an analog to Du()/Dv() for computing the rate of change of a variable with respect to w.

dPdu, dPdv, dPdw

Due to the choice of parameterization, dPdu and dPdv are simply unit vectors aligned with the X and Y axis. dPdw is also provided.

VolumeField

For RiBlobby-generated volumetrics, it is sometimes useful to know the value of the field function. For example, one may want to fade the computed opacity to zero at the envelope of the volumetric region. The new predefined variable VolumeField gives the shader access to this quantity. For all shading other than volumetrics, VolumeField is 0. Shaders that execute on volumetrics will find values of VolumeField ranging from less than zero outside the volume to greater than zero inside the volume.
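
A minimal sketch of the fade-to-zero idea mentioned above (the shader name and the falloff width of 0.1 are arbitrary choices):

surface fadevol(float density = 1; float falloff = 0.1)
{
    // VolumeField is <= 0 at and outside the envelope, > 0 inside;
    // ramp the extinction coefficient down to zero near the envelope.
    Oi = density * smoothstep(0, falloff, VolumeField);
    Ci = Cs * Oi;
}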

area()

The area shadeop returns the approximate area of a circle with diameter equal to the diagonal of the voxel. This may be a poor approximation of the spot size, particularly if the voxels are highly anisotropic due to depthrelativeshadingrate, and is not generally recommended for antialiasing of shaders.

shadow()

The shadow shadeop can be used unmodified for volumes. It will compute the correct filter size to use for either ray traced shadows or for shadow map lookups. For self-shadowing map lookups inside volumes, deep shadow maps are recommended, as standard shadow maps do not carry enough opacity information to represent a volumetric shadow.

filterregion

Volumetrics have full support for filter regions introduced in PRMan 15.0, and their use is highly recommended for correct antialiasing of texture and environment lookups, as well as for estimation of spot sizes for antialiasing. Be careful to note that filterregion(P) may describe a region which is highly anisotropic, depending on the depthrelativeshadingrate attribute. See the following discussion for more details.
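
As a sketch of how a filter region might drive antialiasing in a volume shader, consider the following. It assumes the calculate3d() and minsize() methods described in the filterregion documentation, and the fade thresholds are arbitrary:

surface frnoisevol(float frequency = 10; float amplitude = 2)
{
    filterregion fr;
    fr->calculate3d(P);        // filter region around this volume element
    float fw = fr->minsize();  // scalar size; see the anisotropy caveats
                               // under depthrelativeshadingrate below
    // Fade out noise detail too fine to represent at this filter size
    float fade = 1 - smoothstep(0.25, 0.75, fw * frequency);
    Oi = fade * float noise(frequency * transform("shader", P)) * amplitude;
    Ci = Cs * Oi;
}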

bake3d()

Currently, the bake3d() shadeop simply bakes all points on the volume grid regardless of the VolumeField. As mentioned above, some (but not all) points on the volume grid have a negative VolumeField, and bake3d() will bake these points anyway, even though the renderer's hider itself will never hit test volume elements where the VolumeField is negative. Shader writers may need to be careful about this detail, particularly in areas where a negative VolumeField leads to a negative extinction coefficient.

All shadeops that implicitly compute derivatives (texture() and the like) operate correctly on volumes.

Volumetric Options

VolumePixelSamples xsamples ysamples

In versions of the renderer prior to 18, volumes were always supersampled at a rate determined by the current PixelSamples setting. This could lead to suboptimal hiding of volumes, particularly if a high pixel sampling rate is needed to geometrically antialias fine non-volumetric detail elsewhere in the scene (such as thin hair). Volumes generally do not have such fine detail, and thus do not need such high pixel sampling rates unless they undergo significant depth of field or motion blur.

PRMan 18.0 introduces the VolumePixelSamples RI call, which allows a heterogeneous pixel sampling rate to be used for volumes only. By default, the number of samples taken is identical to that specified for PixelSamples. Explicitly specifying a different (lower) VolumePixelSamples rate allows volumes to be sampled much faster, without affecting the geometric antialiasing of non-volume geometry in the scene (even if that geometry directly intersects volumes). Prior to the pixel filtering stage, the pixel samples taken at heterogeneous rates are blended at subpixel resolution to ensure the final rendered pixel is as accurate as possible (given the inherent limitations in resolving any subpixel interactions between surfaces and volumes at different subpixel times, lens sampling positions, etc.). This heterogeneous sampling can be used only with the stochastic hider.
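
For example (the rates here are illustrative only):

PixelSamples 8 8          # high rate to geometrically antialias fine detail
VolumePixelSamples 3 3    # lower rate used for volumes only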

Volumetric Attributes

Several new RenderMan Interface attributes have been extended or added to support efficient volumetric rendering.

Attribute "volume" "float depthrelativeshadingrate" [z]
Attribute "volume" "float depthresetrelativeshadingrate" [z]

Similar to the relativeshadingrate attribute, two new attributes have been added that allow for anisotropic shading rates through the volume. By default, shading rates are isotropic: the shading rate in depth is the same as that for the shading rate perpendicular to the camera or incoming ray and is specified by ShadingRate. If a depthrelativeshadingrate attribute is specified, then this number multiplies the current ShadingRate independently from relativeshadingrate and the product determines the actual shading rate in depth of the volumetric. This can be used to greatly accelerate volumetric rendering.

Note that when this attribute is used the resulting voxels can be very stretched and may be difficult to antialias correctly. In particular, filterregion(P) may describe a highly anisotropic region. Shadeops which handle filterregions directly (e.g. texture()) will generally have no problem with this, but other shadeops which do not (e.g. wnoise()) usually require the conversion of the filterregion to a float - for example, using filterregion->maxsize(). Shaders which use these shadeops may need careful handling to deal with the anisotropy of the volume shading elements.

The depthrelativeshadingrate attribute interacts directly with volumetric refinement strategies as described below.
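
A typical use might look like the following sketch, where the factor of 8 is an arbitrary example:

ShadingRate 1
# Volumes shade 8x coarser in depth; the lateral shading rate still
# follows ShadingRate (and relativeshadingrate, if set).
Attribute "volume" "float depthrelativeshadingrate" [8]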

Attribute "volume" "float deptherror" [z]
The deptherror attribute controls visible point list compression, enabling significant speedups for volumes with high visible point depth complexity, but low actual color complexity.
Attribute "volume" "deptherror" [0.00392]
This number represents the maximum shift in adjacent visible points before they are considered too different to be composited together. In cases where there are no overlapping volumes, or there is no deep shadow map being output, the deptherror can be significantly increased (leading to faster and slimmer renders) since compositing early does not affect the final answer. However, compositing early will increase error for overlapping volumes and deep shadow map output, thus increasing the deptherror is not advised for those situations.
Attribute "volume" "string depthinterpolation" ["constant"|"smooth"]
This attribute is similar to ShadingInterpolation, but controls the interpolation of color on voxels in the dimension parallel to the camera. It is thus an independent control from ShadingInterpolation, as the latter controls the interpolation of color on voxels on the camera plane. The default value is "smooth", which leads to a result akin to performing a piecewise trapezoidal integration of the volume elements along the depth of the volume. If your shader performs its own integration within the volume element along the depth (perhaps by subsampling), it may be more appropriate to perform a piecewise rectangular integration, in which case you would choose to use "constant" for the depthinterpolation.
Attribute "volume" "string[X] refinementstrategies" ["strategy1" "strategy2" ...]

This attribute specifies refinement strategies that are generally used in conjunction with a relativeshadingrate and/or a depthrelativeshadingrate. By default, no refinement strategies are in force. The goal of a refinement strategy is to increase the shading rate in certain areas such that high frequency detail in the volume may be captured without the expense of having to increase the shading rate everywhere. Thus, a refinement strategy generally requires the use of a relativeshadingrate or a depthrelativeshadingrate at the same time. Three refinement strategies are currently supported.

fieldfunction

This strategy only applies if either the relativeshadingrate or depthrelativeshadingrate multiplier is greater than one. When this strategy is active, the renderer begins the process of dicing a volumetric by using the current ShadingRate multiplied by both the relativeshadingrate and depthrelativeshadingrate as the target shading rates. If the resulting grid crosses the volumetric envelope (i.e. the volume field function is less than zero anywhere on the grid), the grid is discarded and the target shading rates are multiplied by 0.5, unless the renderer reaches target shading rates equal to or less than those specified by ShadingRate alone, in which case the grid is kept.

In other words: binary subdivision of a grid is performed whenever the volumetric grid crosses the envelope and the grid has not reached the lowest possible shading rate.

The following images help motivate why one would use this strategy. If a volume is rendered with a coarse shading rate, the edges of the volume may not be rendered with enough precision to capture details of the field function. In other words, there is geometric aliasing. Here, a cube is rendered with a noise shader at a ShadingRate of 64, resulting in a shading point count of 43,253.

images/figures.volume_rendering/cube64.png

Rendering the volume at a ShadingRate of 1 captures the fine detail of the edge of the envelope, but greatly increases the number of shading points - now at 11,271,852.

images/figures.volume_rendering/cube1.png

Rendering the same image with:

ShadingRate 1
Attribute "shade" "float[2] relativeshadingrate" [64 64]
Attribute "volume" "string[1] refinementstrategies" ["fieldfunction"]

gives a result that is somewhere in between the previous two results: some of the grids that are entirely within the volume will be diced with a target shading rate of 64, and other grids that get closer to the edge of the cube will be diced with target shading rates of 32, 16, and so on all the way down to a shading rate of 1. The result is a large reduction in shading points: now 1,254,826. Note that while the detail of the envelope has now been captured, the coarse shading rate in the middle of the volume is insufficient to capture the finer detail of the noise. It is recommended that this strategy be used only with initial relative shading rates that are sufficient to capture a low frequency color function in the volume, but may be insufficient to capture a high frequency envelope.

images/figures.volume_rendering/cube_refinement.png

Users of this strategy should be aware that the check for crossing the volumetric envelope involves point sampling. So, with a very high initial relativeshadingrate, some detail of the envelope may be insufficiently sampled and the grids that are generated may be thrown out entirely, even though some subset of the region actually crosses the envelope. This may result in missing areas near the volumetric envelope.

intersectinggeometry

The intersectinggeometry strategy refines the volume near any opaque geometry that penetrates the volume. This is useful if the volume itself is mostly a low frequency function, but the color of the volume is affected by a high frequency impulse caused by a piece of intersecting geometry. The classic example is a lit smoky volume which surrounds a piece of shadow-casting geometry. If the smoky volume is controlled by a low frequency function, we can typically get away with a coarse sampling rate of the volume except near the edges of the shadow umbra, which requires a higher sampling rate to accurately render.

Turning on the intersectinggeometry strategy causes the renderer to automatically refine volumetric grids that are intersected by an opaque piece of geometry. Similar to the fieldfunction strategy, when this strategy is active the renderer begins the process of dicing a volume by using the current ShadingRate multiplied by both the relativeshadingrate and depthrelativeshadingrate as the target shading rates. Any grids intersected by opaque geometry are discarded and the target shading rates are multiplied by 0.5, unless the renderer reaches target shading rates equal to or less than those specified by ShadingRate alone, in which case the grid is kept.

The location of opaque geometry is determined by shooting transmission rays within the extent of the volume. These rays are fired with transmissionhitmode set to primitive, so no shading will occur on the geometry during the ray hits; this may be problematic if the opacity of the geometry depends on shading.

uniformdepth

The uniformdepth strategy simply ensures that the size of steps in depth for the volume are uniformly sized. By default, this is not true when rendering with a perspective camera; the step size in depth is finer closer to the camera.

When using this strategy, be aware that the shading rate in depth will by necessity be pessimistic and almost always lead to over-shading unless a suitable relativeshadingrate has also been set.

Multiple strategies may be specified at the same time. In this case, the size of the string array must also match the number of strategies. For example:

Attribute "volume" "string[2] refinementstrategies" ["fieldfunction" "intersectinggeometry"]``

The set of refinement strategies can be cleared by supplying an empty string array:

Attribute "volume" "string[0] refinementstrategies" []``
Attribute "dice" "float minlength" [0] "string minlengthspace" ["world"]

When using the default dicing metric (which involves projection of voxels to raster space, and comparing the area to the ShadingRate), volumes differ from surfaces in that the number of shading points tends to grow as a function of 1/Z^3, where Z is the distance to the camera; surfaces tend to grow as a function of 1/Z^2. This extra factor of 1/Z means that volumes very close to the camera tend to be much more problematic in terms of shading time and memory.

At the same time, it is often the case that strict adherence to a projection strategy is counterproductive for volumes. There is generally no need to capture fine geometric detail (since the volumes are just a mass of rectilinear voxels), and any sharp shading detail in the volume tends to be blurred out when very close to the camera - there is no need to finely sample that detail. The following images are two frames from an animation involving a tracking shot of the camera towards a volume with what appears to be a sharp shadow. As the camera gets closer and enters the volume, the details of the shadow actually become blurrier, yet in this render they are greatly overshaded.

images/figures.volume_rendering/volume_mindice_1.png images/figures.volume_rendering/volume_mindice_2.png

Switching to a dicing strategy based strictly on world space distance can alleviate this problem, but leads to cases where volumes far away are greatly overdiced relative to the camera. The blue and red lines in the graph below demonstrate both the 1/Z^3 behavior of number of shading points generated for the default projection strategy for the aforementioned tracking camera animation, as well as the overdicing that may be incurred with fixed world space dicing far away from the camera.

images/figures.volume_rendering/shadingrate_graph.png

Based on these observations, volumes close to the camera benefit greatly from the minimum dicing metric introduced in PRMan 17.0. When applied to volumes using the projection dicing strategy, this metric ensures that voxel size is computed by the usual projection to raster space metric until one of the voxel dimensions becomes smaller than the specified minimum length, at which point that length is clamped. This allows volumes to dice at the correct rate far away from the camera, while keeping a reasonable shading point count near the camera. While technically this leads to undershading near the camera, this is rarely objectionable for volumes. The yellow line in the graph above shows the number of shading points generated for the same tracking camera animation with Attribute "dice" "float minlength" [0.001] "string minlengthspace" ["world"].

By default, minlength is not enabled: there is no minimum constraint on dicing rate. Setting minlength to a positive value specifies the minimum length of the voxel, as measured in a space denoted by the "minlengthspace" parameter. This space is allowed to be "world", "object", "camera", or any other space marked by a RiCoordinateSystem call. Setting minlength to a negative value indicates that the minlength is to be computed in a different fashion:

  • For a RiVolume where the "type" is not blobbydso, the minlength is automatically half the size of a voxel measured in object space.
  • For a RiVolume where the "type" is blobbydso (i.e. the volume is defined by an implicit field plugin), the minlength is set to be the return value of the MinimumVoxelSize method from the ImplicitField API, and the dicing rate will be constrained to be less than this length measured in object space.
  • For RiBlobby, if there is a DSO in the DAG defining the blobby, the minlength is set to be the smallest non-zero return value returned by calls to all MinimumVoxelSize methods from the DSOs in the graph implementing the ImplicitField API. If there are no DSOs in the DAG the minlength is zero (there is no minimum dicing rate constraint).
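
For example, to let the renderer derive the minimum voxel size from the primitive itself, per the rules in the list above:

Attribute "dice" "float minlength" [-1]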

Volume Rendering Applications

Cloud Rendering

Single Scattering

The interaction of volumetrics with light sources plays a critical role in achieving the correct look for volumetrics such as clouds. The simplest model for rendering volumes involves a single scattering of light. As Blinn points out in his paper Light Reflection Functions for Simulation of Clouds and Dusty Surfaces [Blinn1982], single scattering is an appropriate model of illumination for a volumetric medium with low albedo (typical of smoke). The model for a single scattering of light is that a light beam emitted by a light source L is scattered by a volume element V, and eventually reaches the eye at E, as shown here:

images/figures.volume_rendering/diagram_singlescatter.png

This doesn't look much different from the lighting model for a surface, except for one critical difference: the light beam itself must travel through other regions of the volume (shown as the green regions r1 and r2) both before and after being scattered, and therefore the light is itself extinguished by other volume elements between L and V (through r2) and between V and E (through r1). As mentioned in the previous section, when rendering a volume with the scanline algorithm, the integration of the volume elements between V and E is automatically handled by the renderer. However, without some guidance, the renderer will not automatically handle the extinction between L and V.

This guidance is provided by the use of ray tracing or deep shadow maps, and is exactly analogous to the way rendering of surface elements requires the use of ray tracing or shadows in order to correctly deal with objects between the surface element and the light source.

One way of computing the correct incoming light from L to V using ray tracing is as simple as calling transmission() between L and V. When this shadeop is used against a volume, the renderer will automatically attenuate the result against all intermediate volume elements encountered along the ray. During this process, the renderer conceptually runs the volume shader at all the intermediate points in order to compute the required extinction coefficients. However, if the renderer actually did this for each and every ray fired, it would not be an efficient solution - and indeed this is what happens in PRMan 15.0, or when using the RiBlobby primitive. In PRMan 16.0, when using the RiVolume primitive and the radiosity cache, the renderer doesn't actually need to run the shader for each and every ray. Instead, the radiosity cache can be used to store shaded results, which are reused and interpolated for many transmission rays. This makes efficient single scattering on volumes as simple as calling transmission().
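
As a minimal sketch of a point light that uses ray traced transmission (the shader and parameter names are arbitrary, and falloff shaping and other refinements are omitted):

light voltransmitlight(float intensity = 1;
                       color lightcolor = 1;
                       point from = point "shader" (0, 0, 0))
{
    illuminate(from) {
        // transmission() returns the fraction of light surviving the trip
        // from the shaded volume element (Ps) to the light position; the
        // renderer attenuates through every intervening volume element.
        Cl = intensity * lightcolor / (L . L) * transmission(Ps, from);
    }
}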

Kajiya and Von Herzen's paper from SIGGRAPH 1984 [Kajiya1984] suggests an alternate approach to the ray tracing strategy, based on the observation that, for any given volume element V, the extinction of the light between L and V is constant regardless of E. Kajiya and Von Herzen suggest storing this direct illumination in a giant array. In RenderMan terms, this is natural to represent via a deep shadow map. A deep shadow map, rendered from the point of view of L and containing the volume region, will bake the accumulated extinction from L to each intermediate volume element into the shadow map. Then computing the attenuated contribution from L to V is as simple as computing the direct illumination from L and modulating by a shadow() look-up in the deep shadow map. This also works when the shadows are cast by a mix of surfaces and volumes.

Taking this into account, it is trivial to perform single scattering of volumes in the renderer. The standard approaches used for shadowing surfaces can be used directly with volumes in order to correctly model this. Taking the cloud shader from before, we can modify it by simply adding an illuminance() statement to integrate over the light sources:

#include "cloud.h"

surface cloud_deepshadowlight(float noisefreq = 2.0) {
    Oi = volumedensity(noisefreq);
    color Cdiff = 0;
    illuminance(P) {
        Cdiff += Cl;
    }
    Ci = Cs * Cdiff;
    Ci = Ci * Oi;
}

Assuming the light source makes the appropriate shadow() or transmission() call, depending on whether deep shadows or ray traced transmission is used, this results in an image like the following:

images/figures.volume_rendering/cloud_deepshadowlight.png

While we've now captured the self-shadowing aspect of the volume, the picture no longer resembles a cloud; it looks more like smoke. We will discuss one possible approach to this problem in the next section.


Multiple Scattering

It turns out that single scattering models are not sufficient to handle volumetric media with high albedo, such as clouds. In such cases, the majority of the scattered light reaching the eye has actually scattered through the media more than once.

images/figures.volume_rendering/diagram_multiscatter.png

This builds upon the single scatter diagram, but shows that to compute the light that reaches the eye E from the volume element V, we must consider not only the scattering of light that V directly received from the light source L, but also the light that V receives from other volume elements V1, V2, etc. Moreover, if the scattering is isotropic, these volume elements are not just behind V; they are all around V. So, for an isotropic scatter, in order to compute the light scattered from V towards the eye, we must first compute the direct illumination everywhere in the volume region, and then consider how that direct illumination reaches V while being attenuated by the intervening volume elements.

Even though it is a single pass solution, a brute-force ray tracing solution would quickly become prohibitively expensive for even one additional scatter, let alone multiple scatterings. However, in PRMan 16, using the RiVolume primitive and the radiosity cache, this expense is mitigated to a great degree, since the shading results can be cached and reused for many rays. Hence, we can express single bounce scattering simply by using the indirectdiffuse shadeop and passing in a 0-length normal, along with ensuring that the volume is visible to diffuse rays:

#include "cloud.h"

surface cloud_indirectdiffuse(float samples = 16.0;
                   float noisefreq = 2.0;
                   float interpolate = 1) {
    Oi = volumedensity(noisefreq);
    color Cdiff = 0;
    illuminance(P) {
        Cdiff += Cl;
    }
    uniform float d=0;
    rayinfo("depth", d);
    if (d == 0 && Oi[0] >= 0 && Oi[1] >= 0 && Oi[2] >= 0) {
        Cdiff += indirectdiffuse(P, normal(0, 0, 0), samples);
    }
    Ci = Cs * Cdiff;
    Ci = Ci * Oi;
}

This shader results in the following image. Note the significant change in overall brightness caused by just a single bounce of indirect light. The speed and quality of this render is highly dependent on the number of samples fired by indirectdiffuse(), but since indirectdiffuse in a volume tends to be a low frequency function, the number of samples can be set very low (this render used just 16 samples shot over a full sphere).

images/figures.volume_rendering/cloud_indirectdiffuse.png

Even with the radiosity cache, ray tracing can be very expensive for volume work. To deal with this, just as the renderer has successfully used point-based methods for global illumination, so too can we use these methods for the multiple scattering problem in volume rendering. To that end, ptfilter has a volumecolorbleeding filter that computes color bleeding between volumes (and between volumes and surfaces).

First we bake the direct illumination. Generally, this illumination should be baked at a lower frequency than normal (i.e., using a coarser shading rate): the indirect diffuse results tend to be low frequency, so they gain little from a high density point cloud and are much slower to compute with one.

#include "cloud.h"

surface cloud_bake(string filename="";
                   float noisefreq = 2.0;
                   float interpolate = 1) {
    Oi = volumedensity(noisefreq);
    color Cdiff = 0;
    illuminance(P) {
        Cdiff += Cl;
    }
    uniform float d=0;
    rayinfo("depth", d);
    Ci = Cs * Cdiff;
    Ci = Ci * Oi;
    if (d == 0 && Oi[0] >= 0 && Oi[1] >= 0 && Oi[2] >= 0) {
        float area = area(P, "dicing");
        bake3d(filename, "_area,_radiosity,_extinction,Cs", P, N,
               "interpolate", interpolate,
               "_area", area, "_radiosity", Ci, "_extinction", Oi, "Cs", Cs);
    }
}

Then we run the ptfilter volumecolorbleeding pass:

ptfilter -filter volumecolorbleeding -clamp 1 -sortbleeding 1 \
    cloud_radiosity_interpolate.ptc cloud_scattered.ptc

Finally, we can read the indirect diffuse illumination back in and render that directly (note that the indirect diffuse computed by ptfilter must be multiplied by the extinction).

#include "cloud.h"

surface cloud_indirectlight(float noisefreq = 2.0; string filename="") {
    Oi = volumedensity(noisefreq);
    Ci = 0;
    if (filename != "") {
        color indirectdiffuse = 0;
        texture3d(filename, P, N, "_indirectdiffuse", indirectdiffuse);
        indirectdiffuse *= Oi;
        Ci = indirectdiffuse;
    }
    Ci = Ci * Oi;
}

Alternatively, we can add the indirect diffuse illumination to the single scattered illumination to create a final pass. For the final pass, a fine shading rate can be used, and indeed is necessary for the look of the direct illumination and single scatter. The coarser results of the diffuse illumination will simply be interpolated over the finer voxels of this final pass.

#include "cloud.h"

surface cloud_scatterlight(float noisefreq = 2.0; string filename="") {
    Oi = volumedensity(noisefreq);
    color Cdiff = 0;
    illuminance(P) {
        Cdiff += Cl;
    }
    Ci = Cs * Cdiff;
    if (filename != "") {
        color indirectdiffuse = 0;
        texture3d(filename, P, N, "_indirectdiffuse", indirectdiffuse);
        indirectdiffuse *= Oi;
        Ci += indirectdiffuse;
    }
    Ci = Ci * Oi;
}

We can perform multiple bounce scattering by passing the -bounces parameter to ptfilter. Five scatters can be performed by:

ptfilter -filter volumecolorbleeding -clamp 1 -sortbleeding 1 -bounces 5 \
    cloud_radiosity_interpolate.ptc cloud_scattered.ptc

The effects of multiple bounces can be seen below.

images/figures.volume_rendering/cloud_indirectlight.png (Indirect diffuse illumination only after one bounce)
images/figures.volume_rendering/cloud_scatterlight.png (Combined illumination after one bounce)
images/figures.volume_rendering/cloud_indirectlight_2bounce.png (Indirect diffuse illumination only after two bounces)
images/figures.volume_rendering/cloud_scatterlight_2bounce.png (Combined illumination after two bounces)
images/figures.volume_rendering/cloud_indirectlight_5bounce.png (Indirect diffuse illumination only after five bounces)
images/figures.volume_rendering/cloud_scatterlight_5bounce.png (Combined illumination after five bounces)

Instead of using the external ptfilter utility, the point-based indirectdiffuse() shadeop can also compute volume color bleeding. Just set the parameters "pointbased", "clamp", and "volume" to 1. The computations done when "volume" is 1 are a superset of the regular surface calculations and can handle volume-to-volume, surface-to-volume, volume-to-surface, and even surface-to-surface color bleeding. The input point cloud can contain a mix of points from volumes and surfaces, i.e. points with and without a valid surface normal. The only reason not to always set "volume" to 1 is that the surface-to-surface computations are a bit slower than when "volume" is 0.
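
A sketch of a corresponding shader, assuming the baking workflow above; only the parameters named in this section are used, plus the standard "filename" parameter for point-based lookups, and the shader name is arbitrary:

#include "cloud.h"

surface cloud_ptscatter(float samples = 16; float noisefreq = 2.0;
                        string filename = "")
{
    Oi = volumedensity(noisefreq);
    color Cdiff = 0;
    illuminance(P) {
        Cdiff += Cl;
    }
    if (filename != "") {
        // Point-based color bleeding; "volume" 1 enables the volume-aware
        // computations described above.
        Cdiff += indirectdiffuse(P, normal(0, 0, 0), samples,
                                 "pointbased", 1, "filename", filename,
                                 "clamp", 1, "volume", 1);
    }
    Ci = Cs * Cdiff;
    Ci = Ci * Oi;
}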


Volumetric Lighting

The volumetric rendering techniques presented here are well suited for creating visible beams of light shining through an environment (so-called "God rays"). Older approaches typically involved the use of an Atmosphere shader (as described in Atmosphere and Interior Shaders); these suffered from several drawbacks, including the need to create a piece of geometry in the background and the need to perform ray marching and integration in the shader.

Volumetric lighting typically involves light shaders and shadow map lookups, both of which are computationally expensive, so it is wise to limit this expense as much as possible. An obvious technique is to limit the evaluation volume: a simple spotlight shines energy only into a cone-shaped volume, so enclosing that region of space exactly with a cone-shaped RiBlobby eliminates any needless shading overhead. To that end, a new implicit field DSO that models a cone has been added to the standard distribution. It takes either no parameters or seven. With no parameters, the volumetric region created has its apex at (0, 0, 1), its base on the XY plane, and a base radius of 1.0. Users of the RenderMan interface will recognize this as the standard orientation of a RiCone and may already have tools that can transform this volumetric cone in a similar manner. When seven float parameters are specified, the cone interprets them as:

apex.x apex.y apex.z center.x center.y center.z radius

where center is the center of the base of the cone. This second version of the cone is better suited for blending with other blobby operations.
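
For instance, mirroring the plugin call used in the RIB example below, a volumetric cone with its apex at (0, 8, 0), its base centered at the origin, and a base radius of 5 could be declared as follows (the values are purely illustrative):

Blobby 1 [8 1004 0 7 0 0 0] [0 8 0  0 0 0  5] ["impl_cone.so"]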

Another technique to limit this expense is to use a depth-relative shading rate in conjunction with the fieldfunction and intersectinggeometry refinement strategies. Often the contents of the volumetric region have a low-frequency, noise-based density function that can be adequately sampled with a coarse depth-relative shading rate. The edge of the envelope, however, is typically a very high frequency impulse: the light delivers no energy outside the envelope, and the transition region may be very sudden, so a coarse shading rate is insufficient to capture this transition. The fieldfunction refinement strategy is well suited to this problem: it ensures that the contents well inside the volume are shaded at the coarser rate, while areas near the envelope are shaded at a finer rate. Finally, if there is a piece of shadowing geometry in the middle of the volume, the shadowing function near the opaque object introduces another source of high-frequency detail; here, the intersectinggeometry strategy ensures that enough shading samples are taken near the opaque object to capture the details of the shadow.
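
In RIB, enabling both refinement strategies together with a depth-relative shading rate might look like the following sketch (the [1 10] range is illustrative; the full example below uses only the fieldfunction strategy):

Attribute "volume" "string[2] refinementstrategies" ["fieldfunction" "intersectinggeometry"]
Attribute "shade" "float[2] relativeshadingrate" [1 10]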

Building upon these techniques and combining them with a deep shadow map, we can use the following shader, based loosely on the smoke shader from appnote 20, to once more render the familiar image of a ball sitting in a dramatically lit smoky volume. Note the use of calculatefilterregion to compute an appropriate filter width to pass to the wnoise shadeop. wnoise currently performs only isotropic filtering, but we would like to use this shader on highly anisotropic voxels, so we ensure that we filter correctly in the plane parallel to the camera by passing the computed minsize, even though this under-filters in the depth dimension. We could avoid the under-filtering in depth by passing the maxsize instead, but that would over-filter in the plane parallel to the camera, where the error is much more noticeable. This is one example of how shader writers need to be careful when using a depth-relative shading rate to create highly anisotropic voxels. Future versions of the renderer may support anisotropic filtering in wnoise, which would remove this concern (at least for this shadeop).

surface smokevol(float density = 0.1;
                 float use_noise = 1;
                 float octaves = 1, freq = 1, smokevary = 1;
                 float lightscale = 15;
                 float opacityonly = 0;
                 ) {
    if (use_noise != 0) {
        point Psmoke = transform("shader", P) * freq;
        // Measure the filter size in the same (frequency-scaled) shader
        // space as the noise lookup point
        filterregion fr = calculatefilterregion(Psmoke);
        float fw = fr->minsize();
        float smoke = wnoise(Psmoke, fw,
                             "lacunarity", 1.8,
                             "projectvector", normal(0, 0, 0),
                             "octaves", octaves);
        Oi = density * smoothstep(-1, 1, smokevary * smoke);
    } else {
        Oi = density;
    }

    if (opacityonly == 1) {
        Ci = density;
    } else {
        color li = 0;
        illuminance (P, vector(0,0,1), PI) {
            li += Cl;
        }
        Ci = lightscale * li * Oi;
    }
}

The RIB file - note the use of a deep shadow map, a depth-relative shading rate, and a volumetric refinement strategy:

##RenderMan RIB
version 3.04
FrameBegin 1
        Option "ribparse" "string varsubst" ["$"]
        Option "searchpath" "string procedural" ["${RMANTREE}/etc"]
        PixelSamples 4 4
        Format 512 512 1
        Display "volumeluxo.tif" "tiff" "rgba" 
        Clipping 0.1 1000
        Projection "perspective" "fov" [54.4322]
        ScreenWindow -1 1 -1 1
        ConcatTransform [ 0.999935 -0.0043814 -0.0105109 0  8.84567e-11 0.923019 -0.384754 0  
                -0.0113875 -0.384729 -0.922959 0  6.88667e-09 -2.4846 10.6085 1 ]
        WorldBegin 
                ShadingRate 1
                TransformBegin 
                        Attribute "identifier" "string name" ["spotLightShape1"]
                        Transform [ 1 0 0 0  0 0 -1 0  0 1 0 0  0 8.0842 0 1 ]
                        LightSource "luxospotlight" "luxospotlight" "float coneAngle" [0.523599] 
                                "float penumbraAngle" [0.0872665] "string rman__dmap" 
                                ["volumeluxo_shadow.tex"] "string rman__CausticMap" [""]
                TransformEnd 
                AttributeBegin 
                        Attribute "volume" "string[1] refinementstrategies" ["fieldfunction"]
                        Attribute "shade" "float[2] relativeshadingrate" [1 10]
                        Surface "smokevol" "float density" [0.085] "float use_noise" [1] 
                                "float octaves" [3] "float freq" [0.66] "float smokevary" [1] 
                                "float lightscale" [1] "float opacityonly" [0]
                        Blobby 1 [8 1004 0 7 0 0 0] [0 8.0842 0 0 0 0 5.6606] ["impl_cone.so"] 
                AttributeEnd 
                AttributeBegin 
                        Translate 0 2.69182 0
                        Rotate 90 1 0 0
                        Surface "ball" "float d" [0.5] "float a" [0.5] "float roughness" [0.15686] 
                                "color highlightcolor" [1 1 1]
                        Sphere 1 -1 1 360 
                AttributeEnd 
                AttributeBegin 
                        ConcatTransform [ 2.22045e-16 0 1 0  0 1 0 0  -2 0 4.44089e-16 0  0 0 0 1 ]
                        Surface "woodfloor" "color woodcolor" [0.262821 0.105516 0.0312505] 
                                "float woodr" [0.3]
                        NuPatch 4 4 [0 0 0 0 1 1 1 1] 0 1 4 4 [0 0 0 0 1 1 1 1] 0 1 
                                "Pw" [-5.04383 0 5.74162 1
                                -1.68128 0 5.74162 1 1.68128 0 5.74162 1 5.04383 0 5.74162 1
                                -5.04383 0 1.91387 1 -1.68128 0 1.91387 1 1.68128 0 1.91387 1
                                5.04383 0 1.91387 1 -5.04383 0 -1.91387 1 -1.68128 0 -1.91387 1
                                1.68128 0 -1.91387 1 5.04383 0 -1.91387 1 -5.04383 0 -5.74162 1
                                -1.68128 0 -5.74162 1 1.68128 0 -5.74162 1 5.04383 0 -5.74162 1]
                AttributeEnd 
        WorldEnd 
FrameEnd 

And the rendered result:

images/figures.volume_rendering/volumeluxo.png

Caveats, Limitations, Future Directions

Ray tracing of a volume (i.e., calling transmission() from the shader of a surface object and having transmission rays pass through a volume on their way towards a light source) is fast. The renderer executes the volume shader in parallel at points along the transmission ray, and the actual cost of hit determination is very low.
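
For instance, a surface shader along the following lines (a hypothetical sketch; the shader name and lightPos parameter are illustrative) exercises exactly this fast path: each transmission() call returns a color attenuated by any volume primitives lying between the shaded point and lightPos.

surface shadowedbyvolume(point lightPos = point(0, 10, 0))
{
    normal Nf = faceforward(normalize(N), I);
    // Transmission rays may pass through volume primitives; the
    // renderer runs the volume shader in parallel along each ray
    color trans = transmission(P, lightPos);
    Ci = Cs * trans * diffuse(Nf);
    Oi = Os;
}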

On the other hand, launching rays from a volume can be very slow, particularly if the volume shader is expensive (even if that expense is just a single call to noise). Wherever possible, deep shadow techniques should be preferred for effects such as volume self-shadowing. This expense is greatly reduced in PRMan 16 through the use of RiVolume and the radiosity cache (automatically enabled), but this only applies to rays fired by transmission and indirectdiffuse; rays fired by gather or indirectspecular can still be very expensive.


Frequently Asked Questions

Q: Why use surface shaders for volumes? Why not e.g. atmosphere shaders?

A: It is somewhat counter-intuitive that surface shaders are assigned to the new volume primitives. However, for our shading language model (where the renderer needs the local properties of a small volume element), the restricted execution environments for atmosphere, interior, and exterior shaders are inappropriate.

Q: What are the physical meanings of Cs, Oi, and Ci?

A: Cs is the scattering albedo of the volume; it is a color where each component has a value between 0 and 1 -- just as for surfaces. In physical terms, the albedo A is defined as the scattering coefficient σ divided by the extinction coefficient τ: A = σ/τ.

Oi is the extinction coefficient τ of the volume; it determines how much any light traveling through the volume gets attenuated. It is a color since the extinction can be different for different wavelengths. The extinction coefficient is the sum of absorption and scattering coefficients: τ = α + σ. Unlike for surfaces, the extinction coefficients, and hence Oi, can be larger than 1.

Ci is the light contribution per unit length from the volume. In a volume without light emission it can be computed as Ci = σ * Li, where σ is the scattering coefficient and Li is the incident illumination in the volume. From the definition of albedo, the scattering coefficient σ can also be expressed as A * τ, so we get Ci = A * τ * Li. With our definitions of Cs and Oi this can be rewritten as Ci = Cs * Oi * Li. This expression is the light contribution per unit length in the volume due to scattering of incident illumination.

In a volume with light emission we have to add a term for emitted light: Le. We have to multiply Le by the absorption coefficient α. Why the absorption coefficient? This is customary in volume rendering and has to do with energy preservation. The absorption coefficient can be expressed in terms of albedo and extinction coefficient as follows: α = τ - σ = (1 - σ/τ) * τ = (1 - A) * τ. So the light contribution per unit length due to emission in the volume is (1-Cs) * Oi * Le.

All in all we can express Ci in the following equivalent ways:

Ci = σ * Li + α * Le

Ci = A * τ * Li + (1 - A) * τ * Le

Ci = Cs * Oi * Li + (1 - Cs) * Oi * Le

This derivation provides the rationale for why the color (albedo) Cs should be multiplied by Oi (the extinction coefficient) in shaders for volume rendering - just as is customary for surface rendering. As a bonus, the constant shader (which sets Ci = Cs * Os) also works without modification.
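
As a concrete sketch, a volume shader with light emission could implement the last form directly (the shader name and its uniform emission color Le are purely illustrative):

surface emissivevol(color Le = color(0.2, 0.1, 0.05);
                    color extinction = color(0.5, 0.5, 0.5))
{
    Oi = extinction;   // extinction coefficient tau; may exceed 1
    color Li = 0;
    illuminance(P) {   // gather incident illumination at this point
        Li += Cl;
    }
    // Ci = Cs*Oi*Li (scattering) + (1 - Cs)*Oi*Le (emission)
    Ci = Cs * Oi * Li + (1 - Cs) * Oi * Le;
}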



Appendix: Creating the torus brick map

Here is the RIB file for generating the cloud-like torus point cloud used in the section Volumes Using Implicit Field Plugins:

FrameBegin 0
  Format 200 200 1
  ShadingInterpolation "smooth"
  PixelSamples 4 4
  ShadingRate 4   # coarse shading gives sparse baked points
  Display "blobbyvolumebrickmap_a" "it" "rgba"
  DisplayChannel "color Cs"
  DisplayChannel "color Os"   # extinction coefficients
  DisplayChannel "float myarea"
  Projection "orthographic"
  Translate 0 0 10
  WorldBegin
    # Specify view-independent dicing (not strictly necessary since the
    # square is facing the screen, but avoids a warning message)
    Attribute "dice" "rasterorient" 0
    # Atmosphere shader bakes point cloud of volume texture
    Atmosphere "baketorusvol2"
      "string filename" "blobbyvolumebrickmap.ptc"
      "string displaychannel" "Cs,Os"
      "float b" 0.27   # minor axis
      "float depth" 0.5   # 0.5 for half torus, 1.0 for entire torus
      "color albedo" [0.8 0.8 1.0]   # bluish white
      "float frequency" 1 "float steplength" 0.025
    # Back wall -- necessary to execute the atmosphere shader
    Surface "constant"
    Translate 0 0 0.5
    Polygon "P" [-1 -1 0  1 -1 0  1 1 0  -1 1 0]
  WorldEnd
FrameEnd

The atmosphere shader looks like this:

volume
baketorusvol2(string filename = "", displaychannel = "";
              point center = (0,0,0);
              float a = 0.75; // major radius
              float b = 0.25; // minor radius
              color albedo = 1;
              float frequency = 3, depth = 1, steplength = 0.1)
{
  color ext = 0; // extinction coefficients (computed inside the torus)
  point Pfront, Pcurrent, Pshad;
  normal N0 = 0; // baked volume points have no meaningful normal
  vector In = normalize(I);
  vector dx = steplength * In; // step vector along the viewing ray
  float d, x, z;
  uniform float steps = depth / steplength, step = 0;
  float mpa = area(P, "dicing"); // area corresponding to micropolygon

  // Start the ray march at the front of the volume
  Pfront = P - depth * In;
  Pcurrent = Pfront;

  // March along ray through the volume
  while (step < steps) {
    Pshad = transform("shader", Pcurrent);
    d = length(Pshad - center);
    x = Pshad[0];
    z = Pshad[2];
    // Select points inside torus
    if ((d*d - a*a - b*b) * (d*d - a*a - b*b) < 4 * a*a * (b*b - z*z)) {
      // Compute marble texture
      float sum = 0, freq = frequency, i;
      for (i = 0; i < 6; i = i + 1) {
        sum = sum + 1/freq * abs(0.5 - noise(4 * freq * Pshad));
        freq = 2 * freq;
      }
      ext = 4 * sum + color(0.1);   // extinction coefficients
      // Write the marble texture data point to a point cloud file
      bake3d(filename, displaychannel, Pcurrent, N0,
             "Cs", albedo, "Os", ext, "_area", mpa);
    }
    // Advance one step
    step += 1;
    Pcurrent += dx;
  }

  Ci = albedo * ext;   // constant illumination: Ci = Cs * Oi with Li = 1
  Oi = ext;
}

And finally, the command to generate the brick map is:

brickmake -addpresence 1 blobbyvolumebrickmap.ptc blobbyvolumebrickmap.bkm
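
To use the resulting brick map, a volume shader could read the baked channels back with texture3d; here is a minimal sketch (the shader name is illustrative):

surface torusbrickvol(string filename = "blobbyvolumebrickmap.bkm")
{
    color alb = 1, ext = 0;
    // Look up the baked albedo and extinction coefficients
    texture3d(filename, P, N, "Cs", alb, "Os", ext);
    Oi = ext;
    Ci = alb * ext;   // constant illumination, as in the bake shader
}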