Using Displacement Shaders

July, 1990

One of the most interesting, powerful and unique features of PhotoRealistic RenderMan is the ability to displace the surface geometry during shading. This allows the modeling of true dents, wrinkles and other rough features which are only approximated by other techniques such as bump maps. Since the geometry is actually moved by the displacement shader, not only does the shading change, as in bump mapping, but the silhouette changes and parts of the surface can occlude other parts, just as they do in “real life.” Neither of these effects can be achieved by bump mapping.

However, displacements are sometimes tricky to use, and often lead to unexpected artifacts or performance problems. In particular, the fact that displacements are made to the surface geometry during the shading means that all of the code earlier in the rendering pipeline can only guess what the displacement might eventually be like. By following the hints in this note, it should be possible to eliminate most of the artifacts which can crop up due to incorrect guesses.

WRITING A DISPLACEMENT SHADER

Writing a displacement shader is very similar to writing a normal surface shader. The language is, of course, the same, and nearly all of the same global information is available (e.g., surface position P and normal N, parametric space information u and v, texture coordinates s and t, primitive vertex variables, etc.). The only things that are missing are things which are clearly only important to color calculation (surface and light colors and light direction). The real difference is that surface shaders are responsible for calculating a color for a point on the surface. Displacement shaders are responsible for figuring out where the surface is in the first place!

Move P

When a displacement shader is called, it is passed some position on the surface of a primitive, P. The job of the shader is to move that surface point into its new location. For example, one dreadfully simple displacement shader might shift the surface over one unit in the x-direction.

displacement shift() {
	P = P + point (1, 0, 0);
}

A more realistic example would be a displacement shader that moved the surface up and down in a sinusoidal wave.

displacement sinewave() {
	P = P + point (0, sin(2*PI*xcomp(P)), 0);
}

It would be possible, of course, to change P to something which is totally unrelated to the old P, but this is really strange and should not be done. What is most common is to move the points along their normal vectors, so that curved or polygonal surfaces move in or out in a symmetric way.

displacement knobbies() {
	float a;
	/* Push up an array of little sinewave lobes */
	a = sin(2*PI*3*s) * sin(2*PI*3*t);
	if (a > 0.0)
		P += normalize(N) * a;
}

Recalculate the Normal Vector

The last thing that a displacement shader almost inevitably needs to do is recalculate the surface normal, N. When the shader changes the shape of the geometric surface, this also changes the directions of the surface's tangents and normal vectors. If the shader does not recompute these values, they will keep their old unbumped values. Sometimes this is okay, but usually it's not what you want. The appropriate shader function to call is calculatenormal.

One subtle mistake that is sometimes made is to call calculatenormal inside of a conditional which moves the point. This is a mistake because when a point moves, it changes the shape of the surface in a neighborhood all around the point. Thus, the normal vectors of neighboring points have to be corrected, too, not just the normal vectors of the points that move. The easiest way to get around this is to get in the habit of calling calculatenormal as the very last thing, outside of all of the conditionals.

displacement knobbies() {
	float a;
	/* Push up an array of little sinewave lobes */
	a = sin(2*PI*3*s) * sin(2*PI*3*t);
	if (a > 0.0)
		P += normalize(N) * a;
	/* Recalculate normal vector */
	N = calculatenormal(P);
}

Notice that calculatenormal calculates the geometric surface normal of the new surface. If the normal vectors which are desired in the shading calculations are Phong-interpolated normals (supplied on the vertices of polygonal data), calculatenormal will not compute a new interpolated value for the (now no-longer flat) surface. It is quite tricky to calculate this value correctly, and those methods are beyond the scope of this note.

Using the Right Space

Up to now, the points and vectors that have been used are in the default shading coordinate system. In PhotoRealistic RenderMan this is camera-space, the coordinate system with the camera at the origin looking down the positive z axis. This means that the lengths of all of our displacement vectors are in camera-space units. Thus, the displacements of one primitive will be the same size as the displacements of other primitives. Sometimes, this is what you want. Other times, you want displacements which scale with the object, so that smaller versions of the same object have smaller displacements.

In general, it is a good idea to code your displacement shaders (and other shaders for that matter) so that they calculate lengths in some known useful coordinate system, rather than let it default. That way, changes in the camera or global scale or other transformations will not have unexpected side-effects on the size of your displacements. Two of the best coordinate systems to use are world-space (which gives you primitive-independent lengths) and object-space (which scales with the object).

displacement knobbies() {
	float a;
	/* Transform P and N to object space */
	P = transform("object", P);
	N = transform("object", N + point "object" (0,0,0));
	/* Push up an array of little sinewave lobes */
	a = sin(2*PI*3*s) * sin(2*PI*3*t);
	if (a > 0.0)
		P += normalize(N) * a;
	/* Transform P back to current space */
	P = transform("object", "current", P);
	/* Recalculate normal vector */
	N = calculatenormal(P);
}

Amplitude Control

It is always a good idea to make your shaders as general as possible, so that you don't have to rewrite them every time you need a slightly new effect. One of the simplest things you can do to make it much easier to control your displacement shaders is to include an amplitude control as a parameter to the shader. This will make it very easy to fine-tune the sizes of the bumps to get just the right effect. It will also make it much easier to set up the correct displacement bounds, as described below.

displacement knobbies(float amplitude = 1.0;) {
	float a;
	/* Transform P and N to object space */
	P = transform("object", P);
	N = transform("object", N + point "object" (0,0,0));
	/* Push up an array of little sinewave lobes */
	a = sin(2*PI*3*s) * sin(2*PI*3*t);
	if (a > 0.0)
		P += normalize(N) * a * amplitude;
	/* Transform P back to current space */
	P = transform("object", "current", P);
	/* Recalculate normal vector */
	N = calculatenormal(P);
}

Calling the Displacement Shader

Once you have the displacement shader, you apply it to your primitive objects using the RiDisplacement subroutine (the Displacement directive in a RIB stream). This call is used just like the RiSurface subroutine: it takes the name of the shader and a parameter list, it is a primitive attribute and so is on the attribute state stack, and it is applied to all subsequent primitives until it is turned off. When you want to turn the displacement mapping off, you can either pop the attribute stack (with RiAttributeEnd) or you can call RiDisplacement(RI_NULL, RI_NULL).
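In a RIB stream, the sequence might look like the following sketch (the shader name and the amplitude value are illustrative; the Declare may be unnecessary if your renderer reads parameter types from the compiled shader):

Declare "amplitude" "uniform float"
AttributeBegin
	# All primitives from here to AttributeEnd get the displacement
	Displacement "knobbies" "amplitude" [0.5]
	Sphere 1 -1 1 360
AttributeEnd	# pops the attribute stack, turning the displacement off again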

AVOIDING ARTIFACTS

Displacement shaders are evaluated and applied to the surface of objects during the shading stage of PhotoRealistic RenderMan. This is after the primitives have been diced into the tiny micropolygons which get shaded, but before those micropolygons have been sent to the hidden-surface algorithm. In fact, the displacement shader is the first shader that is evaluated on a surface, so that the subsequent light, surface and atmosphere shaders are operating on the correctly moved data. The problem is, of course, that the various geometric operations which operate on the original surface geometry before shading (for example, bounding boxes and size estimation) don't know how much (if at all) the surface will move under the displacement. Therefore, the mechanisms that exist to eliminate geometric artifacts like polygonal silhouettes are occasionally outsmarted by the displacement.

Here are some of the artifacts that can be seen, and some hints on how to avoid them. Each of the attributes described below is explained more fully, with examples, in the PhotoRealistic RenderMan User's Manual and the PhotoRealistic RenderMan Tutorial. If the technical terms, such as micropolygons and grids, are confusing or unfamiliar, don't worry. These forays into the rendering algorithm can be ignored at will.

Displacement Bounds

The first, and most obvious artifact occurs when the displacement moves the surface outside of its normal bounding box (that is, the bounding box which is computed without any knowledge of the eventual effects of the displacement). For example, a displacement that adds one to the radius of a sphere will pretty obviously move large regions outside the now far-too-small bounding box. PhotoRealistic RenderMan doesn't pay any attention to an object until it reaches a pixel inside an object's bounding box, so things that are displaced upward out of the box will get missed, leaving large holes in the object.

This effect is combatted by the RiAttribute ("displacementbound", "sphere", (RtPointer)&bound, "space", (RtPointer)&space, RI_NULL) call. This call expands the size of the bounding box of an object to handle exactly this problem. The displacement bound is a single floating-point number which specifies the maximum radial displacement (maximum amplitude, if you like) that the displacement will ever generate on any point on the surface. This length is defined to be in units of the named space (defaulting to "object"). It is important that the units and space of this attribute call match the units and space used in the displacement shader itself. Remember that if the shader asks for displacement without specifying a coordinate system, it means camera-space units, so "camera" should be the space used in the RiAttribute("displacementbound") call. It is highly recommended that both the shader and the RiAttribute call express displacements in shader space units.

If you make the displacement bound too small, the object will still fall outside the bounding box, and pieces may still disappear. If you make the displacement bound too large, then the bounding box will be too big, and PhotoRealistic RenderMan will do far too much work processing the primitive and use up lots of memory storing the pieces that were done too early. Now we can see why getting the space right and having a reasonable amplitude control are so helpful: with both in place, figuring out the correct bound is much easier!
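For example, with the amplitude-controlled knobbies shader above (which computes its lengths in object space), the displacement bound can simply match the amplitude. The RIB form is sketched below; the parameter spellings follow the RiAttribute call above, but should be checked against your renderer's documentation:

Attribute "displacementbound" "sphere" [0.5] "space" ["object"]
Displacement "knobbies" "amplitude" [0.5]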

Shading Rate

Imagine a displacement shader which moves a single shading sample a large distance away from its neighbors. Since displacements happen after dicing into micropolygons, this means that a single micropolygon vertex (actually, the corner shared by 4 micropolygons) is moved. It should be obvious that this means that the four affected micropolygons are now much larger than they used to be. (They will probably be long and spiky.) In fact, they are now much larger than the shading rate would normally allow. This means that some rather large number of pixels will be covered by these few oversized micropolygons, and the colors will be more or less constant over those sections. The shading will look blocky, just as if the shading rate were too large, and textures might be blurry. This is particularly visible if the new normal vectors catch the light differently than their old neighbors, because the shading difference between neighbors will look like stripes instead of the nice smooth gradients you are used to.

The problem is the large discontinuity in displacement between adjacent points on the surface. The best way around this problem is to change the shader to smooth out the transitions, so that the surface can fade in the change. If the sharp jump is exactly the effect that is wanted, the only solution to the shading blockiness is to set the shading rate very low and turn on Gouraud shading. This can help overcome most of the visible effects, but the penalty is rendering speed.
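For example, rather than switching a displacement on abruptly at a threshold, the built-in smoothstep function can fade it in over a band of the surface. This is a sketch; the shader name and the band edges (0.4 to 0.6 in s) are arbitrary:

displacement fadedridge(float amplitude = 1.0;) {
	float a;
	/* Fade the ridge in over 0.4 < s < 0.6 instead of jumping abruptly */
	a = amplitude * smoothstep(0.4, 0.6, s);
	P += normalize(N) * a;
	/* Recalculate normal vector */
	N = calculatenormal(P);
}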

Backfacing Facets

Sometimes, an extreme displacement like the one described above will cause individual micropolygons to bend (like a sheet of paper with one corner lifted). When looked at from certain angles, part of the micropolygon might be backfacing while the other part is frontfacing. If this happens, PhotoRealistic RenderMan may get confused and backface cull the entire micropolygon. This will leave tiny holes along the sides of steeply sloped displacements.

The best way around this is to prevent backface culling on the object with the RiSides(2) call. In this case the micropolygon will be shaded as though it is backfacing, but at least there will be no holes.
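In a RIB stream the equivalent is:

Sides 2	# shade both sides; prevents backface culling of bent micropolygons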

Dicing Cracks

Surfaces which have displacement shaders are particularly susceptible to cracks; that is, tiny “pinholes” which occur along the edges where PhotoRealistic RenderMan has split the surface into subpieces for easier rendering. Any time a piece of geometry is split into two pieces along a new edge, the two sides of the edge must be displaced exactly the same distance at exactly the same spots. If the displacements don't agree, or, more subtly, if the displacements occur at slightly different positions, cracks can occur.

The first error to check for is whether the displacement shader asks for different displacements at the same point on the surface. For example, a displacement on a sphere had better calculate the same displacement along the longitudinal seam (the “0 meridian”), where s=0.0 on one side and s=1.0 on the other side. Similarly, all the overlapping points at the poles had better displace the same, or a crack will occur.
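For example, a displacement that used s directly would disagree at the seam (s jumps from 1 back to 0), while a periodic function of s gives the same answer on both sides. A sketch:

displacement seamsafe(float amplitude = 0.1;) {
	/* s itself is discontinuous at the seam, but sin(2*PI*s) takes
	   the same value at s=0.0 and s=1.0, so the two sides match up */
	P += normalize(N) * amplitude * (0.5 + 0.5 * sin(2*PI*s));
	/* Recalculate normal vector */
	N = calculatenormal(P);
}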

The strategies for removing other dicing cracks are the same for surfaces with displacement shaders as without. The RiAttribute("dice", "binary", (RtPointer)&flag, RI_NULL) call will force some regularity in the shading positions along the edges, which will minimize the chances of a crack opening. Also, permitting large grids with the RiAttribute("limits", "gridsize", (RtPointer)&size, RI_NULL) call will minimize the number of edges that exist in the first place.
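In RIB, these attributes might be set as follows (the gridsize value is illustrative, and the exact spellings should be checked against the RiAttribute calls above for your renderer version):

Attribute "dice" "binary" [1]
Attribute "limits" "gridsize" [256]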

Texture Mapping Cracks

Sometimes the amplitude of a displacement is calculated by a texture map lookup. Under certain conditions, this can cause cracks to appear along the edges between primitives, similar to the ones described above. This occurs when the micropolygons on one side of the edge are significantly smaller than the micropolygons on the other side of the edge (for obscure reasons, triangles often have this problem). The effect is that the larger micropolygons read a lower resolution (filtered) version of the texture while the smaller ones read a higher resolution. This means that the displacement is subtly different on the two sides of the edge; and, as mentioned above, a crack opens.

The best way around this is to disable texture filtering on those displacement textures. This is done by replacing the shading language statement

foo = texture(map);

(which invokes the standard texture filtering) by the more complex unfiltered request

#define EPS (1/1024)
foo = texture(map, s,t, s+EPS,t, s,t+EPS, s+EPS,t+EPS);

The value EPS represents one pixel at the highest resolution of the displacement texture, and can obviously be changed to accommodate the texture map you have.

CONCLUSION

With care, it is possible to use displacement shaders in PhotoRealistic RenderMan with no objectionable artifacts and with almost no speed penalty. The most critical parameter is the displacement bound. If the displacement shader is written using the correct coordinate system and user-adjustable amplitude control, this parameter should be easy to calculate. Since displacement shaders operate independently of surface shaders, it is possible to displace surfaces made of any type of material. Clearly, displacement shaders are a powerful tool for making interesting and highly detailed photorealistic images.