Created by Leif Pedersen

© Disney/Pixar


Robot Room


Overview

Robot Room started as an interior study for indirect illumination and progressed into a scene with a robot couple taking a break while building their first robot son...what can I say, I like robots...

The scene was created to take advantage of many of the production-ready features in RenderMan for Maya and their integration with Nuke and Fusion, including various Pixar Surface materials, Maya Paint Effects, Daylight simulation, IES profiles and Volumes. We also touch on several crucial compositing features, such as AOVs, LPEs, Light Groups and Deep Data compositing.

Scene Breakdown

The technical goal of this scene was to achieve realism in an interior setting without compromising speed, so I kept objects clean and simple and relied on shading detail.

We are using textures downloaded from texturise.club for most of the scene, including our wall. We are also using Pixar Surface Material (PxrSurface) for most of the scene, as it provides an extensive amount of controls for our surfaces.

The scene also uses native Pixar procedural patterns as much as possible, including paint flakes and several noise patterns. Keeping shaders procedural allows us to tweak and extend them easily.

Shaders

There is a great variety of shaders in the scene, but they are all meant to look synthetic. From fabrics to metals, the scene is tailored to the robots' fancy lifestyle. The scene relies heavily on indirect rays, with indirect diffuse providing much of the interior lighting and indirect specular providing much of the robots' shininess.



Diffuse

We are using simple diffuse surfaces to describe our walls and carpets. Not everything needs to be shiny, especially since not having to calculate specular saves us rendering time.

To keep things realistic, I usually add a slightly irregular diffuse response. We can do this by adding some roughness to PxrSurface, which gives surfaces a micro-facet diffuse response via RenderMan's Oren-Nayar diffuse model instead of the traditional Lambert model. Lambert is designed to simulate perfect diffusion, and most real surfaces are not perfect diffusers.

The diffuse maps can be hooked up to the shader with a PxrTexture node or a Maya file node; RenderMan supports both natively. I've chosen PxrTexture for this scene, but you might want to use the Maya file node for better viewport feedback.

Color Management

When we use typical 8-bit color textures to drive color channels in our shader, we need to make sure we set our PxrTexture node to Linearize. This converts the color profile from sRGB to linear, providing an accurate representation of color and lighting. When using textures to drive data inputs, such as roughness or bump, there is no need to linearize.
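For intuition, here is a minimal sketch of the sRGB-to-linear conversion that Linearize performs. This is the standard sRGB transfer function (IEC 61966-2-1), not RenderMan's own code:

```python
# Standard sRGB -> linear transfer function, shown for intuition only.
def srgb_to_linear(c):
    """Convert one sRGB channel value in [0, 1] to linear light."""
    if c <= 0.04045:
        return c / 12.92  # linear toe segment near black
    return ((c + 0.055) / 1.055) ** 2.4  # power-law segment
```

Note how a mid-gray of 0.5 linearizes to roughly 0.21; skipping this step is exactly why un-managed renders look washed out.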

It's very important to keep color management (gamma correction) in mind, or we will get inaccurate lighting results.



Fabric

The fabric is meant to be synthetic, so we've used iridescence to give it an otherworldly feel; combined with some fuzz and a normal map, we get a strange-looking but robot-friendly fabric.

It is best to use normal maps when possible because they encode all required values in the RGB channels. The only cost is loading the texture tile, which is amortized by RenderMan's texture cache. Bump is more expensive because we need to evaluate the input 3 times instead of 1 to compute the orientation of the normal (tip by Pixar's Philippe Leprince).
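To illustrate what a tangent-space normal map encodes, each RGB channel stores one vector component remapped from [-1, 1] to [0, 1]. A sketch for intuition, not RenderMan's implementation:

```python
import math

# Decode a tangent-space normal-map texel back into a unit normal vector.
def decode_normal(r, g, b):
    x, y, z = 2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0
    length = math.sqrt(x * x + y * y + z * z) or 1.0  # guard against zero
    return (x / length, y / length, z / length)
```

The familiar "flat" normal-map blue (0.5, 0.5, 1.0) decodes to the unperturbed normal (0, 0, 1).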

To use normal maps in RenderMan, we need to use the PxrNormalMap pattern. We can load a file texture from the pattern itself or we can connect a procedural to the input normal attribute.

Metal

As expected, there are many metals in the scene, from shiny to dull and realistic to fantastic, so we're using a mixture of artistic and physical specular response in Pixar Surface to have as much freedom as possible. A great way of getting good values for the refraction index and extinction coefficient when using physical mode is going to refractiveindex.info, where we can choose convenient values from the 3D category and get very accurate results for our metals.

The extinction coefficient is particularly important, as it is the determining factor for how energy is distributed across the surface. This can have a dramatic impact on the realism of our shaders.

These values are often significantly beyond a typical 0-1 range, so don't be afraid to push them.
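The role of the extinction coefficient is easy to see in the standard Fresnel reflectance at normal incidence for a complex index of refraction (n, k). This is textbook optics, independent of RenderMan:

```python
# Normal-incidence reflectance from a complex index of refraction (n, k).
# Shows why the extinction coefficient k dominates the response of metals.
def reflectance_at_normal(n, k):
    return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
```

Glass (n = 1.5, k = 0) reflects about 4% at normal incidence, while gold-like values (roughly n = 0.27, k = 2.78) reflect close to 90%; that jump is entirely driven by k.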

For the dad we're using the PxrFlakes pattern to give him a shiny metallic paint look. PxrFlakes is not a color pattern but a normal pattern, meaning it perturbs the normal and acts like a normal map, so we can hook it up directly to the bump attribute of our PxrSurface material.

Glass

Our robots have retro Edison light bulbs for eyes, so they need a glass housing (they also serve as robot-bifocals). We are also using a grunge scratch texture to add variation to the roughness of the glass. This will add some realism and slightly diffuse the filaments, adding a nice glow to the eyes of the robots.

Make sure to clamp texture map values to reasonable amounts; we've chosen values between 0.01 and 0.14. A PxrRemap pattern is very useful for this, as it allows us to manipulate the input and output values thoroughly.
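Conceptually this is a plain linear remap, squeezing a 0-1 grunge texture into the 0.01-0.14 roughness window. The function name and defaults below are ours for illustration, not PxrRemap's parameter names:

```python
# Linearly remap a 0-1 texture value into a narrow roughness window.
def remap(value, out_min=0.01, out_max=0.14, in_min=0.0, in_max=1.0):
    t = (value - in_min) / (in_max - in_min)
    t = min(max(t, 0.0), 1.0)  # clamp so out-of-range texels stay in bounds
    return out_min + t * (out_max - out_min)
```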

Glass Shading Network

Glass shading needs many diffuse and specular bounces to allow the ray to pass through the object and shade what's on the other side, which can be computationally expensive. We don't want such high ray depth values for our entire scene, so in order to optimize this, we've assigned diffuse and specular overrides to our glass objects. This will help us render the interior much more efficiently, while making the glass nice and rich with many ray bounces.

To isolate ray depth per object:

  1. Select glass objects
  2. Go to the Transform node
  3. In the Attribute Editor go to RenderMan > Trace.
  4. Change Max Specular and Max Diffuse Depth. We chose 10 for Specular and 5 for Diffuse.


When compositing transparent objects like glass onto a background in post, we naturally need a transparent alpha channel, as is the case with the robot mom against the window frame. For this to happen, we need to make sure we turn on accum opacity in the integrator settings, or else the glass will have a solid alpha.

Paint Effects

RenderMan is able to render Maya Paint Effects natively (as curves), without conversion to polygons. This is a great time saver and very efficient to render; all we need to do is tell Pixar Surface about the Paint Effects brush colors. We can do this by adding a primitive variable, in this case Cs (color), which tells the shader to grab the color values from our Paint Effects brush.

Let's break it down:

  1. Select the Paint Effect shape node
  2. In the attribute editor go to RenderMan > Shading > Shading Group.
  3. Attach a RenderMan material to the Shading Group.
  4. Attach a PxrPrimvar to Pixar Surface Shader Color.
  5. Set Variable Name to Cs and Variable Type to Color.

Since Paint Effects petals and leaves are single-sided, make sure to turn on Double Sided under the PxrSurface Diffuse and Specular lobes.

Following these steps ensures the shaded plants look much more realistic and interact with the rest of the scene in a more cohesive manner.

Lighting

The look-dev goal was to create a dusk environment, for that golden magic-hour feel.

This scene mixes realism and stylization for lighting. Even though we are using a daylight simulator, we are art directing most of the scene with manual kickers and fill lights when necessary. RenderMan provides the best of both worlds, so we can freely art direct our vision.

We are also making heavy use of light blockers and inverted light blockers to focus the light exactly where we want it, including removing unwanted specular highlights in the lamp and keeping the rim lights from excessively washing out the scene.

The main light blocker is an actual piece of geometry, meant to simulate a long hall behind the camera; this keeps too much light from coming in behind the camera and flattening our scene. The result is higher-contrast lighting from the window, with soft indirect light subtly bouncing in from behind the camera.



Daylight

To simulate a sun-drenched dusk, we used the RenderMan Daylight simulator. This tool provides a very handy "manual" mode, which we are using to art direct the sun exactly where we need it. It also provides a great way to quickly set up lighting by stipulating a date, time and place, which is a great way of playing time traveler (I wonder what the day was like on January 28th, 1980...)

Due to internal optimizations, the daylight simulator doesn't need portal lights...so don't go crazy trying to make it work!

Blocker

Thanks to light blockers we can redirect light exactly where we want it, going as far as removing specular highlights or specifying a perimeter of illumination for our light. All we have to do is right-click the light's Light Filter attribute and choose from a selection of light filters. We've chosen PxrBlocker for most of the scene.

In order to use a blocker as an area of influence for a light, we just need to invert it. This does the opposite of blocking: light is only emitted inside the blocker area.

IES

A great way to add realism to our scene is to add IES profiles to our lights. These photometric data maps provide physical light distribution straight from manufacturer specifications, creating interesting patterns of light. We can also create custom stylized IES profiles for a balance between realism and artistic control.

Note that using IES profiles will affect the intensity of the light, sometimes by a big margin, so ideally start lighting your scene with the IES profiles already assigned to your lights. For example, in our Robot Room, the IES-profiled light has an exposure of 8 and emits about the same light as a default sphere light with an exposure of 20.
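To put that gap in perspective: exposure on RenderMan lights works in photographic stops, with final output scaling by 2 to the power of the exposure on top of intensity. A small sketch of the arithmetic:

```python
# Exposure works in photographic stops: each stop doubles the output.
def relative_output(exposure_a, exposure_b):
    """How many times brighter a light at exposure_b is than at exposure_a."""
    return 2.0 ** (exposure_b - exposure_a)
```

The gap between exposure 8 and exposure 20 is 12 stops, meaning the IES profile is absorbing roughly 2^12 = 4096x of the raw output in this scene.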



Normalize

We are also using light normalization in most of our lights. This allows us to scale lights without having to worry about them changing intensity. Note that this is not physical: in the real world, given two lights with the same surface brightness, the bigger light will emit more total light than the smaller one...but it is nice to break the rules sometimes...
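A sketch of what Normalize does in principle: keep the total emission fixed no matter how the light is scaled, instead of letting output grow with surface area. A sphere-light area formula is used for illustration; the function names are ours:

```python
import math

# Total energy a sphere light adds to the scene (arbitrary units).
def total_emission(intensity, radius, normalize=True):
    area = 4.0 * math.pi * radius ** 2
    # Normalized: output fixed regardless of size.
    # Un-normalized: output grows with surface area, as it would physically.
    return intensity if normalize else intensity * area
```

Doubling the radius of an un-normalized sphere light quadruples its output; a normalized one stays constant.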

Temperature

Another nice feature is light temperature. We are using temperature for all our lights, including the robot eyes (filaments), which gives the scene a pleasing and realistic light color based on Kelvin color temperature. For example, the average home lightbulb has a temperature of about 2700K and an office fluorescent light about 4200K; we're taking this into account when choosing light temperatures for our robots...we want them to feel at home...
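The warm-vs-cool intuition comes straight from blackbody physics: Wien's displacement law says a blackbody's peak wavelength shifts with temperature, which is why 2700K reads warm/orange and higher temperatures read cooler/bluer. Pure physics, not a RenderMan API:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

# Peak emission wavelength of a blackbody at the given temperature.
def peak_wavelength_nm(kelvin):
    return WIEN_B / kelvin * 1e9  # convert meters to nanometers
```

A 2700K bulb peaks around 1073 nm (toward the red/infrared end), while a 4200K source peaks around 690 nm.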


Overall, it's important to keep in mind that this is a creative tool, so don't be afraid to add lights and direct the eye where you want the viewer to look in order to tell the best story. In this simple example, I've placed some lighting to emphasize a little easter egg and help the legibility of the books...these books will help the robots deal with any existential crisis...

Volumes

With volumes we can add a sense of atmosphere to a space by providing light rays and particulate. With RenderMan we need to think physically, but we also need to be efficient, as these effects can be expensive to render.

Light Rays

The rays of light coming through the window are done using a simple cube volume primitive which encompasses the entire living room. To shade this volume primitive, we attach a PxrVolume material to it and set the Density to 0.001, which mimics a very fine amount of material in the air. We also increase the sampling slightly to keep noise down, with 8 samples per pixel being a good number, and raise the primary anisotropy to around 0.95 so the rays come in more dramatically.
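Why such a tiny density reads as faint haze: transmittance through a homogeneous volume follows Beer-Lambert falloff. This is a simplification; the real PxrVolume response also involves scattering color and anisotropy:

```python
import math

# Beer-Lambert transmittance through a homogeneous volume.
def transmittance(density, distance):
    return math.exp(-density * distance)
```

Across a 100-unit room, a density of 0.001 still lets about 90% of the light through, just enough to catch visible rays without dimming the scene.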

For added control we are using two light rigs:

  • Daylight + robot lights + volume multiscatter
  • Distant light + robot lights + single scatter

One provides a haze and the other more traditional light rays, which gives us nice control in post.

We then create a separate scene and use Maya's Render Setup to non-destructively override shading attributes for all materials. 
We're also using this system to manipulate light rigs and separate them for our volume rendering.


Particles

For the scene particulate we are using Maya nParticle Sprites. This will allow us to attach a simple procedural shader that mimics dust, and with the help of a simple script we can randomly scale and rotate the particles upon creation.
RenderMan supports Maya particles natively, so simply assign a RenderMan material to them and render away!


We've created a procedural shader which simulates dust particles by using a combination of PxrFractal and PxrRandomTextureManifold. We combine these results with PxrBlend and PxrRamp in order to create random swirls and patterns in every particle.
Once we're happy with the resulting pattern, we attach the result to the material's Color and Presence.

It's important we have some sort of 2D manifold in our particle shader so that the particles don't swim through the procedural while animating. 

Now that we have a shader attached, we can add some variance to the rotation and scale of our sprites by adding an expression to the Sprite's Twist PP and Scale PP. You can see all this in detail with the supplied project.

nParticleShape1.spriteScaleYPP = rand(1.2,3);  
nParticleShape1.spriteScaleXPP = rand(1.2,3);  
nParticleShape1.spriteTwistPP = rand(0,360);


Render Settings

Like all projects, there are optimizations to consider, and thanks to RenderMan's render settings we're able to use advanced features to make sure we are rendering in a reasonable amount of time and without artifacts. Let's go over some of the ones that stood out for this particular scene...

Rays

First, I optimized specular and indirect rays as much as possible. We are using PxrPathTracer with only 1 diffuse bounce and 4 specular bounces. I kept these settings low because the added bounces of light didn't have a significant effect on the look I was trying to achieve.
This depth is insufficient for several surfaces, such as the glass, so we are manually increasing these attributes on a per-object basis to keep our scene as efficient as possible.

Dicing

To avoid any seams in our displacements, we need to turn on Dice Watertight in the Advanced menu; this ensures any UV seams are sewn together for a clean displacement result.
If you need to visualize extra-fine displacement detail, decrease the Micropolygon Length. The default of 1 works for most situations, so we've left it at that.



Caches

Another important factor when rendering is understanding the cache settings. We need to make sure we have enough memory to crunch all the data in the scene and avoid going to virtual memory (swapping), which is significantly slower. For this, increasing the modest default cache amounts is important. Even though the scene is very memory efficient, I chose to use a total of 16 GB for this scene to be on the safe side.

Typical memory usage should be around 80% of system memory; this leaves enough for the operating system and other applications to run reliably. When rendering locally, allocate less if your Maya session is very heavy and already using considerable amounts of memory.
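The budgeting rule above can be sketched as a one-liner. The 0.8 fraction is the heuristic from this text, not a RenderMan default:

```python
# Rule of thumb: give the renderer ~80% of physical memory, minus whatever
# the interactive Maya session is already holding. Values are in GB.
def suggested_cache_gb(system_gb, maya_session_gb=0.0, fraction=0.8):
    return max(system_gb * fraction - maya_session_gb, 0.0)
```

On a 20 GB machine this suggests 16 GB, matching the allocation used for this scene.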

Diagnostics

RenderMan comes with a very handy Diagnostics system which is incredibly useful for understanding where the renderer is spending its time. The stats file provides many interesting data points for tracing rays and memory consumption. One of the obvious ways to visualize the data is with the supplied heat map, which helps us identify problem areas which might need optimizations. These areas are usually not obvious to the naked eye, so having a complete breakdown of our scene is very helpful.

RenderMan statistics

We can turn the stats files on in the advanced render settings by specifying a name in the Statistics XML File...for example /scenes/robotRoom.xml, which will give us an XML file in our project's scenes directory...I like to make a separate "stats" directory myself.

These files are viewable with a web browser and provide some very useful visual cues for our scene.



Denoise

Using RenderMan's powerful denoiser is crucial to keeping render times down. In this scene we are seeing significant speedups by deferring the last 10-20% of convergence to the denoiser. We are also separating the beauty and volume passes into separate renders in order to take advantage of denoising strategies unique to each.

Compositing

I used Blackmagic Fusion for the still version of the Robot Room, and for the animated fly-through I used Foundry Nuke, as I needed Deep Data support. Both comps are available in the supplied project. We are not doing any major compositing, only some color correction, DOF, vignetting and camera shake, since most creative choices were finalized in the Maya session.

AOVs

These passes, or Arbitrary Output Variables, are practically free in RenderMan because we are just requesting already-available data to be written to disk; they do not function like a traditional render layer or render pass, where data has to be recomputed.

Useful AOVs include depth, normals and motion vectors, all handy for post effects such as depth of field, relighting and motion blur. We're not using any of them for this scene, so we are not outputting any.



LPE

Light Path Expressions, or LPEs for short, have minimal computational overhead because the lighting data is already available; we are only asking RenderMan to write it into a new image on disk. LPEs even allow us to separate each lobe of the Pixar Surface material, giving us incredible flexibility for manipulating our scene in post.
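For reference, these are the kinds of out-of-the-box expressions we're talking about. Treat the exact strings as illustrative, since they can vary between RenderMan versions:

```text
lpe:C<RD>[<L.>O]         direct diffuse
lpe:C<RD>[DS]+[<L.>O]    indirect diffuse
lpe:C<RS>[<L.>O]         direct specular
lpe:C<RS>[DS]+[<L.>O]    indirect specular
```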

Even though we output most of the out-of-the-box LPEs in order to have flexibility in post, I only used a subset for color correction. For example, it is useful to have indirect diffuse or specular LPEs to increase luminosity or saturation in certain parts of the image, which lets us art direct the viewer's focus in post.

Light Groups

We are using Light Groups to separate the main light rigs, which I have arbitrarily split into 4 categories: House Lights, Sun Light, Robot Lights and Rim Lights.

This allows us to bring extra emphasis to a specific light rig; in our case we are using the Rim Lights and the Robot Lights to increase contrast and lighting intensity on the walls.

To create Light Groups in Maya, we first need to enter an arbitrary light group name in our light. This tells RfM what we intend to do.


Then we need to go to the render settings AOVs tab and add a new beauty pass.

Make sure nothing on the Display side is selected, or the channels will be added to the selected Display instead of creating a new one.



Once we have the beauty pass, we can tell RfM that we want to isolate the light to only our new Light Group.


Using Light Groups is a great way to completely redo your lighting in post. Below we can see I was able to change to a nighttime lighting scheme just by using the Robot Lights, the Rim Lights and a very small amount of the Sun Light rig.

Nighttime version of Robot Room using only LPEs in post

Deep Data

Deep Compositing refers to the use of deep image data as part of the compositing process to accurately merge transparent, intersecting, volumetric or otherwise disparate elements which would be impossible to merge correctly with 2D compositing. In other words, in software such as Nuke, we can use deep image data to merge objects through fur, glass, clouds, transparent leaves, etc., without any of the pitfalls of traditional techniques like Z-depth maps, where edge filtering, transparency and sub-pixel accuracy were a problem.
RenderMan is able to output Deep Data through DeepEXR, a versatile image format with scanline decoding support and several other features which speed up compositing I/O.




RenderMan makes the output process very simple, all we need to do is:

  1. Create a new beauty pass.
  2. Change the Display Type to DeepEXR.
  3. Change the Channel Type to Point.


Now we're able to get a DeepEXR image on disk which we can import into Nuke for deep data compositing.


Final Image

Using the versatility of RenderMan, we're able to achieve our look without sacrificing technical control or render times. Thanks to interactive workflows, we're able to look-dev our scene within the Maya session, which means compositing has been kept to a minimum.

Final Robot Room Image

Recap

RenderMan allows us to iterate quickly and make many look development changes during an interactive session, which is a great way of minimizing post work and getting as close as possible to our vision in-render.

  • Remember to optimize shaders when needed, not all surfaces need to be specular, especially if the visual impact is negligible.
  • Adding surface detail and abrasion to our surfaces is crucial for realism.
  • Extend Maya's rendering capabilities greatly with Custom RenderMan attributes.
  • Don't get caught up in over-emphasizing realism in exchange for creativity; RenderMan plays well with both aesthetics.
  • Make separate render passes for volumes to take advantage of volume-specific denoising.
  • Make sure to understand render settings well and what optimizations work best for your particular scene.
  • Use AOVs and LPEs to extend your art direction greatly in post, but remember RenderMan allows you to iterate quickly in Maya.
  • Use Deep Data compositing to do incredibly accurate DOF in Nuke.

About the Author

From pitch to delivery, Leif Pedersen is a CG generalist who's worn many hats in production for over 10 years. His background in traditional arts mixed with technical foundations has allowed him to work with a varied client list in both television and commercial work, and he has now joined Pixar's RenderMan team.


License

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

 
 
 
