This documentation is intended to instruct developers in the authoring of custom Bxdfs, or shaders. Users should also consult the RixBxdf.h header file for complete details.
The RixBxdf interface defines a shading plugin responsible for creating a RixBsdf from the ShadingContext and the set of connected patterns. The RixBxdf interface characterizes the light-scattering behavior at positions in a material.
Integrators "drive" Bxdf subclasses by:
- invoking RixBxdfFactory::BeginScatter to obtain a RixBsdf parameterized by the Bsdf's potentially varying input parameters. RixBxdfFactory is expected to invoke EvalParam for those parameters that pertain to the integration request.
- requesting the Bxdf to generate samples of the Bxdf function to facilitate integrating a subset of the general light transport solution with RixBsdf::GenerateSample.
- requesting the Bxdf to evaluate samples associated with a particular phase of integration with RixBsdf::EvaluateSample and RixBsdf::EvaluateSamplesAtIndex.
- requesting the Bxdf to return local light emission by calling RixBsdf::EmitLocal.
- evaluating the geometry presence for each sampling point with RixOpacity::GetPresence.
- requesting the shadow opacity with RixOpacity::GetOpacity.
Additional points about plug-in lifetime and threading:
- there is one instance of a RixBxdfFactory per bound RiBxdf (RIB) request.
- a single RixBxdfFactory may be active in multiple threads simultaneously
- the context for a per-thread execution is signaled via Begin/End. The RixBxdfFactory should stash state in the RixBsdf object and consider that the lifetime of a RixBsdf instance is under control of the integrator. Generally integrators should attempt to minimize the number of live RixBsdf objects but may nonetheless require a large number. For this reason, RixBsdf instances should attempt to minimize memory consumption and construction/destruction costs.
- the primary RixBsdf entrypoints operate on a collection of shading points in order to maximize shading coherency and support SIMD computation. Integrators rely on the RixBsdf's ability to generate and evaluate samples across the entire collection of points. Should an integrator require multiple samples at the shading points, it will invoke the single-sample method multiple times, taking care to initialize the random number contexts appropriately. Sample evaluation may be performed in a one-sample-all-points variant (EvaluateSample) and a one-point-n-samples variant (EvaluateSamplesAtIndex). Generation, however, is constrained to one sample across all points. Evaluation typically has different requirements (e.g. for making connections in a bidirectional integrator), whereas generation typically benefits from being performed all points at once.
To support several integrator-optimization strategies (diverse sampling frequencies, baking, etc.), we characterize a Bxdf as a collection of individually sampled lobes. Integrators may request samples from a subset of lobes and obtain insight into the general characteristics of each component.
Specular and Diffuse
Diffuse lobes represent isotropic scattering of light, the appearance of which is largely independent of viewing (V) direction.
Specular lobes span a wide range of surface roughness values, from broad glossy interactions to mirror-like reflection; their appearance depends on the viewing direction (V), the incident light direction (L), and the surface normal. Perfectly specular (mirror-like) lobes should be marked as discrete.
User lobes allow Bxdfs to provide lighting-independent material properties to the integrator for splatting. (User lobes can be used for surface albedo.)
In general, Bxdf shaders must provide three methods to describe surface scattering: GenerateSample, EvaluateSample, and EvaluateSamplesAtIndex. Because RenderMan traces rays in batches, these functions operate over multiple points at a time.
In order to maintain physical correctness, Bxdfs are expected to conserve energy and obey Helmholtz reciprocity. Care should be taken so that GenerateSample and EvaluateSample return consistent results. This allows Bsdf plug-ins to be compatible with different rendering techniques such as unidirectional path tracing, bidirectional path tracing, and photon mapping.
GenerateSample is used to generate Bxdf-weighted samples, whereas EvaluateSample is used to weight given sample directions by the Bxdf function. EvaluateSample is useful for evaluating direct lighting and for multiple importance sampling, both of which require that these two functions return consistent results.
EvaluateSamplesAtIndex behaves like EvaluateSample, but evaluates a number of sample directions at a single shading point index.
Forward and Reverse Pdfs
The RixBxdf interface provides separate scalar pdf values for two shading situations. The forward pdf should account for light moving from the L to V direction, whereas the reverse pdf accounts for the opposite (V to L). One example use of the reverse pdf is photon mapping. Bxdf plug-ins should provide both values for integrators to use.
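The contracts above can be sketched for a single Lambertian lobe. This is a minimal stand-alone sketch, not the RixBsdf API: Vec3, CosineSampleHemisphere, and EvaluateLambert are illustrative names, and a real plug-in operates on the SDK's vector and color types over whole batches of shading points. It shows a cosine-weighted generate step and a matching evaluate step that returns the sample weight together with symmetric forward (cos(theta_i)/pi) and reverse (cos(theta_o)/pi) pdfs.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in vector type; a real plug-in would use the
// RtVector3 / RtColorRGB types from the RixBxdf.h SDK headers.
struct Vec3 { float x, y, z; };

static const float kPi = 3.14159265358979323846f;

// GenerateSample analogue: cosine-weighted hemisphere sample about
// the +z normal, built from two uniform random numbers in [0,1).
// The pdf of the returned direction is cos(theta)/pi.
Vec3 CosineSampleHemisphere(float u1, float u2)
{
    float r = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    return Vec3{ r * std::cos(phi), r * std::sin(phi),
                 std::sqrt(1.0f - u1) };
}

// EvaluateSample analogue: given view direction wo and incident
// direction wi (both in the normal's frame), return the reflectance
// weight (brdf * cos(theta_i)) and both pdfs. For a cosine-weighted
// Lambertian lobe the forward pdf is cos(theta_i)/pi and the reverse
// pdf is cos(theta_o)/pi, so forward and reverse are symmetric in
// wi and wo, satisfying Helmholtz reciprocity.
void EvaluateLambert(float albedo, const Vec3& wo, const Vec3& wi,
                     float* weight, float* fwdPdf, float* revPdf)
{
    float cosI = wi.z > 0.0f ? wi.z : 0.0f;
    float cosO = wo.z > 0.0f ? wo.z : 0.0f;
    *weight = albedo / kPi * cosI;
    *fwdPdf = cosI / kPi;
    *revPdf = cosO / kPi;
}
```

Because the evaluate step recomputes exactly the pdf that the generate step sampled from, the two entry points stay consistent, which is what multiple importance sampling relies on.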
Bxdfs that do not scatter light (e.g. PxrConstant) should disable all lobes and set the forward and reverse pdfs and weights to zero.
Bxdfs can emit light by overriding the RixBsdf::EmitLocal method. This emission function is not considered for lighting services and is therefore not importance sampled. Local light emission is intended for low intensity emission or baked/pre-integrated results (e.g. projection mapping).
In RIS, Bxdfs not only compute the scattering of light, but may also compute opacity and presence for non-opaque materials. This is done by returning a RixOpacity subclass from RixBxdfFactory::BeginOpacity.
As discussed in the Opacity Revealed appnote, the term opacity has traditionally conflated several notions (including presence). To focus just on presence, we consider the common case of trying to render a leaf. Rather than model it as a mesh of polygons, it's common to represent a leaf as a cheap bilinear patch with an associated mask texture to represent the leaf shape. Where the mask is zero, there is no leaf. We use the term presence rather than opacity to capture this use case; i.e. the map is a scalar presence map. It's important to note that in this usage, presence maps conceptually have no bearing on light scattering (and vice versa). Where a material is partially present, its light scattering properties do not change - instead, the material simply undergoes "less scattering". Critically: presence is not a means to model transparent surfaces such as glass, because the light scattering and refractive properties of the glass cannot be modeled by presence alone.
When dealing with opaque objects with presence maps, it is usually the case that they are either entirely present or not present at all; in practical terms, the associated presence texture map consists mostly of 1s or 0s. The RIS renderer takes advantage of this by combining presence with probabilistic hit-testing: it interprets the presence value as the probability that we actually hit a surface. The compelling advantage of this approach is that for camera rays and indirect rays, the renderer only needs to shade the surface and run the Bxdf when the surface is actually hit due to a non-zero presence, and the renderer does not have to institute a policy of automatic continuation rays. Where the intention is to model a thin semi-transparent surface, rather than an opaque object, presence should not be used; instead, this continuation calculation requires explicit computation by the Bxdf in the form of samples generated on the "back" side of the geometry. Even if those samples are generated in the same direction without refraction (i.e. modelling "thin" translucency), we require that these samples be explicitly generated by the Bxdf GenerateSample method, and in this case the presence is fully 1.
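The probabilistic hit-test described above can be illustrated with a small stand-alone sketch (PresenceHit and HitFraction are hypothetical helpers, not renderer API): treating presence as a hit probability means that, averaged over many rays, the fraction of accepted hits converges to the presence value, with no continuation rays required.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// The renderer's rule, in miniature: for each intersection with a
// partially present surface, draw a uniform random number u and
// treat the surface as hit only when u < presence.
bool PresenceHit(float presence, float u) { return u < presence; }

// Monte Carlo check of the rule: fire nRays rays at a surface with
// the given presence and return the fraction that register as hits.
double HitFraction(float presence, int nRays, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int hits = 0;
    for (int i = 0; i < nRays; ++i)
        if (PresenceHit(presence, uni(rng)))
            ++hits;
    return double(hits) / nRays;
}
```

A presence of 0.3 therefore yields roughly 30% of rays shading the surface and 70% passing straight through, which anti-aliases the silhouette without any extra shading work.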
Bxdfs that wish to assert a non-trivial presence must implement two policies: they must respond to the GetInstanceHints entry point with a bit vector which includes the k_ComputesPresence bit, and must respond to the BeginOpacity method with a RixOpacity object which implements the GetPresence method. The RixOpacity object is bound to a shading context, much like a RixBsdf object; GetPresence typically will use this shading context along with any pattern inputs to fill shadingCtx->numPts worth of presence values with a value between 0 and 1, where 1 is fully present, 0 is absent, and any value in between denotes a probabilistic presence for anti-aliasing.
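A GetPresence implementation might look roughly like the following sketch. StubShadingContext and FillPresence are stand-ins for illustration only; a real plug-in would implement RixOpacity::GetPresence against the actual shading context and read its connected pattern inputs rather than a raw mask array.

```cpp
#include <cassert>

// Stand-in for the shading context: the real GetPresence receives a
// shading context carrying numPts points with pattern inputs already
// evaluated. Here "mask" plays the role of a connected presence
// pattern (e.g. a leaf-shape texture lookup), one value per point.
struct StubShadingContext {
    int numPts;
    const float* mask;  // hypothetical pattern input
};

// GetPresence-style fill: one presence value in [0,1] per shading
// point. 1 = fully present, 0 = absent, and anything in between is a
// probabilistic presence for anti-aliasing. Out-of-range pattern
// values are clamped defensively.
void FillPresence(const StubShadingContext* sctx, float* presence)
{
    for (int i = 0; i < sctx->numPts; ++i) {
        float p = sctx->mask[i];
        presence[i] = p < 0.0f ? 0.0f : (p > 1.0f ? 1.0f : p);
    }
}
```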
The renderer offers an additional service that may be useful for scenes of high complexity (e.g. forests of leaves). By responding to the GetInstanceHints entry point with a bit vector that also includes the k_PresenceCanBeCached bit, the Bxdf can request that the renderer cache presence values, and thereby avoid re-evaluation of presence values for every ray that intersects the surface. While this may be an efficiency win, it also introduces bias due to caching; this bias may manifest as blurred results due to interpolation from coarse dicing, which would not normally occur elsewhere in RIS. Users of this feature should also be aware that the cache efficiency is determined by the opacitycachememory setting.
In the previous discussion, we've hinted that it's the responsibility of a semi-translucent thin glass-like shader to capture light transport across its surface for radiance queries. When it comes to transmission rays, requiring a similar responsibility is problematic for two reasons:
1. Transmission rays do not bend: they are usually fired to service a transmittance integration over a straight line between two points, and typically this is in service of direct lighting - one of those points is on the surface, the other is on the light. Any light bending behavior due to refraction cannot be trivially handled by a shader; this means that any physically plausible object which requires refraction must be opaque to transmission rays.
2. Transmission rays typically outnumber camera and indirect rays, and the cost of running a full shader to obtain a transmission value is worth minimizing due to the number of transmission rays in a typical shot.
Nonetheless, it is very desirable to have a method by which approximate colored shadows can be efficiently produced, as these are usually preferable to the noisy shadows produced by integrating indirect paths. This is where opacity enters the picture. In RIS, the opacity of a surface is restricted solely to transmission rays, and is the colored transmittance straight through the surface without bending.
To see why computing this opacity can be useful, consider the picture shown above. The box encloses both the angel statue and also a volume. The only light source is behind the stained glass window (the right wall of the box). If we were to model the stained glass window as a physically accurate piece of glass, complete with full refraction, then the crepuscular rays (so-called "god rays") shining through the volume would be inordinately expensive to render because the only lighting that could be considered for the volume is entirely indirect; the glass would have to be an opaque object and transmission rays from the volume would not reach the light source. If we instead model the stained glass window as a thin, non-physically accurate piece of glass with an opacity shader, then transmission rays from the volume can now directly reach the light (running the opacity shader of the glass window along the way to get a colored contribution). This allows the scene to be rendered with the direct lighting optimization, which allows for a much faster render.
When writing a Bxdf for a thin translucent surface, careful consideration should be given as to whether indirect rays should also be fired through the surface if colored opacity is being used. Depending on the circumstances it is quite likely that such rays should not be fired, otherwise the lighting contribution will be doubled.
Bxdfs that wish to assert a non-trivial opacity must implement two policies: they must respond to the GetInstanceHints entry point with a bit vector which includes the k_ComputesOpacity bit, and must respond to the BeginOpacity method with a RixOpacity object which implements the GetOpacity method. The RixOpacity object is bound to a shading context, much like a RixBsdf object; GetOpacity typically will use this shading context along with any pattern inputs to fill shadingCtx->numPts worth of opacity values with a colored opacity value.
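A GetOpacity-style fill can be sketched in the same spirit as the presence example. Color, FillOpacity, and the tint/density inputs are hypothetical names for illustration; a real plug-in would implement RixOpacity::GetOpacity on the shading context using its evaluated pattern inputs and the SDK's color type.

```cpp
#include <cassert>
#include <cmath>

// Minimal RGB triple standing in for the SDK's color type.
struct Color { float r, g, b; };

// GetOpacity-style fill: one colored opacity value per shading
// point, used by the renderer to attenuate transmission rays passing
// straight through the surface. (0,0,0) means fully transparent,
// (1,1,1) fully opaque; colored values yield colored shadows. In
// this sketch opacity scales a per-point tint by a per-point density.
void FillOpacity(int numPts, const Color* tint, const float* density,
                 Color* opacity)
{
    for (int i = 0; i < numPts; ++i) {
        opacity[i] = Color{ tint[i].r * density[i],
                            tint[i].g * density[i],
                            tint[i].b * density[i] };
    }
}
```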
Again, similar to presence, the renderer offers a caching service that may be useful for scenes of high complexity (e.g. forests of leaves). By responding to the GetInstanceHints entry point with a bit vector that also includes the k_OpacityCanBeCached bit, the Bxdf can request that the renderer cache opacity values, and thereby avoid re-evaluation of opacity values for every ray that intersects the surface. The same caveats as presence apply: while this may be an efficiency win, it also introduces bias due to caching; this bias may manifest as blurred results due to interpolation from coarse dicing, which would not normally occur elsewhere in RIS. Users of this feature should also be aware that the cache efficiency is determined by the opacitycachememory setting.
Bxdfs are allowed to compute both presence and opacity. In this case, when dealing with transmission rays, both presence and opacity are separately computed, and combined by the renderer into the transmission result across the surface. An optimization note: the shading mode passed to BeginOpacity is a hint as to the renderer's intentions for the RixOpacity object: if the shading mode is k_RixSCPresenceQuery, only the GetPresence method will be called. BeginOpacity need only evaluate the pattern inputs relevant to computing presence. On the other hand, if the shading mode is k_RixSCOpacityQuery, either presence or opacity (or both) will be executed on the object, and pattern inputs relevant to both presence and opacity should be fully evaluated.
RenderMan will search for plugins on demand under the rixplugin searchpath. There must be a 1:1 mapping of integrator/pattern/bsdf plug-ins to DSOs, and they need to share the same name; e.g. the following RIB stream would search for a plug-in file named MyLambert.so:
Bxdf "MyLambert" "lambert1" "color tint" [0.5 0.5 0.5]
By convention, args files (.args) are used to define shader metadata needed by host applications. This includes parameter names, default values, localization, and GUI hints. Args files are written in a simple XML format and should be easy to parse.
Bridge specific metadata should also be written to args files. For example, Maya requires nodeid and classification information:
<rfmdata nodeid="1053406" classification="shader/surface:rendernode/RenderMan/bxdf:swatch/rmanSwatch"/>
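Putting this together, an args file for the MyLambert example above might look roughly like the following. This is a sketch only: the tint parameter mirrors the RIB example, but the exact set of elements and attributes accepted is defined by the args-file schema shipped with RenderMan and the host bridges.

```xml
<args format="1.0">
  <param name="tint" label="Tint" type="color" default="0.5 0.5 0.5">
    <help>Surface tint color.</help>
  </param>
  <rfmdata nodeid="1053406"
           classification="shader/surface:rendernode/RenderMan/bxdf:swatch/rmanSwatch"/>
</args>
```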
Note that RenderMan itself queries parameter information using the RixBxdfFactory::GetParamTable method, not by reading args files.
Bxdfs and Roundcurves
If a user wishes to provide fine-grained control over, for example, transmission rays, then the BeginOpacity method on a Bxdf can be employed. See the source of the PxrHair Bxdf for an example. Here, if "transmissionBehavior" is non-zero, then shadow rays will be tinted with the same transmission color that is used in the Marschner specular model for the TT and TRT lobes. Note that we don't include Os in this example for transmission rays, since that term is already accounted for in the stochastic rejection test on the curve primitives themselves.
Similarly, the Bxdf does not need to compute a presence term for non-transmission rays: again, Os is accounted for at the ray tracing level, and the transmission color is accounted for in the Marschner model itself.
It is important to note that this is specific to the case of roundcurves and does not apply to other primitives, including flat curves or curves with user-provided normals and/or displacement.
Migrating from RSL
- Cs is only used by RSL shaders, not RixBsdf plug-ins
- Lighting AOVs (e.g. diffuse) are splatted by the integrator, not the Bsdf. They can be specified using LPEs.
RixBxdf.h describes the interface that Bxdf plugins must follow and is required by all RixBxdf plugins.