
The Making of a Samurai

Download Project - 11GB


I’ve always loved the look of historic Japan and the Samurai, and I’ve wanted to create artwork set in that era for a very long time. This project was originally planned as a short film about Samurai warriors, but the film had to be cancelled due to unforeseen circumstances. The Samurai character, however, was completed to full production standards and workflows.

This tutorial covers the creation of the character, focusing mainly on the lookdev of the skin, eyes, and hair using RenderMan for Katana and MaterialX Lama, a layered material system developed by Industrial Light & Magic.


The base model for the Samurai was created with MetaHuman Creator, which provided an easy-to-use, interactive system for iterating on the look of the character without having to make tedious manual changes.

The final character inside of MetaHuman Creator.

Once I was happy with the look in MetaHuman Creator, I exported the model and passed it on to Riccardo Gheller, who took it into ZBrush and added further anatomical precision to enhance the realism of the model. Riccardo also handled the groom. I modeled physically accurate eyes in Maya based on physical data from real eyes and reference imagery. The final eyes were made up of a Cornea, Limbus, Lens, Iris, and Meniscus.

The final asset was made up of 3,236 individual objects.

MetaHuman export vs final model with new eyes.

Texturing and UVs

The UVs were done entirely inside of Maya. The full character spans 113 UDIMs. It was very important to build the asset to an exceptional level of detail so that it would hold up in any shot for the film.

Final asset with UVs.

The texturing was done in Mari. The first step was transferring textures from a VFace scan from TexturingXYZ using ZBrush and the ZWrap plugin. The textures were then taken into Mari and cleaned of any imperfections left by the scan or the manual transfer process.

The Diffuse, Displacement, and Utility maps provided by the scan were used to create all of the necessary surfacing channels.

Build-up of the Diffuse channel and showcase of the other surfacing channels.

Texturing Tips


Roughness

When creating the Roughness channel, keep in mind that oil builds up in the pores, making them less rough than the surrounding skin surface. This should be subtly represented in your Roughness map.

Clear Coat

The clear coat (specifically, the mask driving the shader used to create the clear coat) needs different values on different parts of the face, depending on your reference. Usually, though, the T-zone of the face is oilier and therefore requires higher clear coat values.

Clear Coat Roughness

Similar to the Clear Coat, the Clear Coat Roughness must be dialed in based on references. The Roughness value inside the pores should be higher than on the surrounding surface. This inverts the logic used for the base Roughness, but it helps balance out the pores and keeps the character from looking too oily. If your character is very shiny and oily, you can make the pores shinier to match.


Diffuse Mean Free Path

The Diffuse Mean Free Path (DMFP) is the average distance light travels through the medium before scattering, measured per wavelength in the R, G, and B channels. Below is a useful chart of DMFP values for humans, measured in centimeters.

Region          R       G       B
Base value      1.9500  1.3000  0.8027
Lips            2.9000  1.8080  1.1937
Ears            3.8000  1.6077  1.2179
Cheeks          2.5000  1.3625  0.8750
Nose            2.0000  1.3333  0.8233
Nostril wings   3.9000  1.6350  1.0500
Eye sockets     1.5000  1.0000  0.6175
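For reference while painting or scripting, the chart above can be captured directly as data. A minimal sketch (the region keys are my own naming, not from the project):

```python
# DMFP values per region in centimeters, transcribed from the chart above.
# The dictionary keys are hypothetical names for the face regions.
DMFP_CM = {
    "base":          (1.9500, 1.3000, 0.8027),
    "lips":          (2.9000, 1.8080, 1.1937),
    "ears":          (3.8000, 1.6077, 1.2179),
    "cheeks":        (2.5000, 1.3625, 0.8750),
    "nose":          (2.0000, 1.3333, 0.8233),
    "nostril_wings": (3.9000, 1.6350, 1.0500),
    "eye_sockets":   (1.5000, 1.0000, 0.6175),
}

# A quick sanity check: skin scatters red light farthest, so every
# region in the chart satisfies R > G > B.
for r, g, b in DMFP_CM.values():
    assert r > g > b
```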

Look Development

For this entire project, we’re using MaterialX Lama for the shading. Lama gives artists a great deal of flexibility by allowing lobes to be layered in arbitrary ways during look development, and because it was developed for visual effects by Industrial Light & Magic with an emphasis on physical correctness, it is a reliable choice for photorealistic characters.

The Katana script for the human LookDev and rendering.


Skin

The skin material is made up of four main parts:

  • The SSS layer
  • The primary spec layer
  • The clear coat layer
  • The displacement and Bump to Roughness layers

Sub Surface Scattering

The “LamaSSS” shader drives the bulk of the skin material. Since large parts of the character are covered by clothing, thick hair, or armor, I made a mask for the SSS which is used to mix between two Lama BSDFs: the “LamaSSS”, a heavy shader used for the visible parts, and the “LamaDiffuse”, a much lighter shader used for the covered-up parts.

The IOR is set to 1.4 and the scale is set to 1, so that the values provided in the DMFP map are respected and not multiplied by the scale. The unit length is set to 0.1, which treats the scene as centimeters for the SSS calculation.

The shader uses the “Path-traced exponential” mode because although it is the heaviest of the modes computationally, it is also the most physically accurate.

Human skin is forward scattering, so setting the anisotropy to 0.9 provides realistic behaviour. The effect of scattering anisotropy is typically most visible around cartilage or thin skin, such as the ears. You may need to adjust the value depending on your target photographic reference.
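Scattering anisotropy is conventionally the g parameter of the Henyey-Greenstein phase function. I’m not claiming this is RenderMan’s exact internal model, but it illustrates why g = 0.9 means strongly forward scattering:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: the probability density of
    scattering at angle theta away from the incoming direction."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

g = 0.9
forward = henyey_greenstein(1.0, g)    # light continuing straight ahead
backward = henyey_greenstein(-1.0, g)  # light bouncing straight back

# At g = 0.9, forward scattering outweighs backscattering by a factor
# of several thousand, which is why thin backlit areas like ears glow.
print(forward / backward)
```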

We need to set the shader not to trace the meniscus in the SSS, to avoid dark lines where the meniscus intersects the skin. We do this by creating a membership for the meniscus and adding that membership to the shader with a minus sign. We also need to tick “Continuation Rays”.

While here, we can also set the visibility and trace parameters for the meniscus, which serve to resolve the same issue of dark shadows and dark lines around the intersection points of the eyes.

Primary specular

The primary specular layer is made from a “LamaDielectric” shader set to physical mode with an IOR of 1.45. It is fed by the roughness map and the normals are coming from the Bump to Roughness node, which we’ll go over shortly.

Clear Coat

Much like the Primary specular layer, the coat is made up of a “LamaDielectric” shader, but with an IOR of 1.37. The coat is not fed by a Bump to Roughness node or any normals node for that matter. It is fed directly only by the Coat Roughness channel.

Displacement & Bump to Roughness

The displacement is where most of the detail for the skin comes from. The setup is quite basic: I’m using the RGB displacement map from my transferred TexturingXYZ scan and adjusting the value of each channel (which represent the primary, secondary, and tertiary displacement details respectively) to match my reference. This was done in a “PxrDispScalarLayer.” I also set the “MIP Bias” to -4 to ensure that a higher-resolution mip was selected at all distances.

There was also a detail displacement pass extracted from ZBrush during modeling. I used the follicle mask from the groom to add a concave bump to this ZDisp pass using a “PxrBlend” and then layered this entire setup with the TexturingXYZ displacement to create the final displacement setup.

I used Bump to Roughness (BTR) to drive the fine porous bump details in the skin, so that detail and fidelity hold up from afar, where aliasing would normally crush bump details driven by a bump shader. The BTR node converts the bump details into roughness and anisotropy to maintain the same look when the bump itself is no longer visible at a distance.
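The idea behind folding bump into roughness can be sketched with a Toksvig-style approximation. This is an illustration of the principle only, not RenderMan’s actual BTR formulation:

```python
import math

def effective_roughness(base_roughness, filtered_normal_variance):
    """Treat squared roughness as slope variance and add the variance of
    the normal (bump) detail that texture filtering has averaged away,
    so the highlight footprint stays consistent at any distance."""
    return math.sqrt(base_roughness ** 2 + filtered_normal_variance)

# Close up, nothing is filtered away: we get back the Base Roughness.
print(effective_roughness(0.3, 0.0))   # ~0.3
# Far away, the pore bumps have been averaged out of the normal map,
# so their variance reappears as extra roughness instead.
print(effective_roughness(0.3, 0.05))  # ~0.37
```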

I offset the filter scale to 0.7 in the advanced parameters to make sure that a very high level of detail was kept in the texture mips.

You can feed the BTR node with your own roughness map by plugging your roughness map into the “Base Roughness” port. This is the roughness the BTR node will use when it is fully zoomed into the detail, so it is your highest level of accuracy.

I had to invert the bump normal and increase the “Bump Normal Gain” by a lot to get the look I wanted.


Eyes

The most complex part of any human character is the eyes, both computationally and artistically. It is crucial to get them to behave correctly under light: we can very easily tell when a character’s eyes look fake, and when we do, we read the entire character as fake, regardless of how good everything else looks.

I set the “Max Trace Depth” for the eyes to be quite high. I’ve found that a minimum of 4 for specular and diffuse works well. You can go higher up to 8 but I’ve found it has diminishing returns. I also set the “Shadow Exclude Subset” and “Transmit Exclude Subset” to the meniscus.

Cornea and Sclera

The Cornea and Sclera are driven by the same material because they are a single mesh. A mask drives the cornea’s size and location and is used to mix between the Cornea shader (a “LamaDielectric”) and the Sclera shader (a “LamaSSS” and “LamaDielectric”).

The sclera SSS is set up very similarly to the skin SSS, the difference being an IOR of 1.

The DMFP value of the sclera is very important in achieving the right look. In my research, I’ve found that a DMFP value of 1.3 at the edges of the eyes, rising to 6 towards the centre, works really well.

Something else to keep in mind is that human eyes have a ring where the Limbus meets the Sclera that lets more light through when the eye is lit from the side. I call this the “Limbus backlight.” Using a simple radial mask from the centre of the eye, I increase the DMFP in the area around the Limbus to a value of 8, and I also pumped in some warmer tones to match the warmer SSS response seen in references. This allows the Sclera shader to mimic the same response to a side light.

Limbus backlighting OFF on the left and ON on the right. It's a subtle effect.
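As a sketch, the sclera DMFP ramp plus the limbus boost could be expressed as a function of the radius from the eye’s centre. The ring position and width here are hypothetical placement values; the DMFP endpoints are the ones quoted above:

```python
def sclera_dmfp(radius, ring_center=0.55, ring_width=0.06):
    """Radial DMFP ramp: ~6 at the eye's centre falling to ~1.3 at the
    outer edge, boosted to ~8 in a narrow ring around the limbus.
    `radius` runs from 0 (centre) to 1 (outer edge)."""
    t = min(max(radius, 0.0), 1.0)
    base = 6.0 + (1.3 - 6.0) * t                        # centre -> edge falloff
    ring = max(0.0, 1.0 - abs(t - ring_center) / ring_width)
    return base + (8.0 - base) * ring                   # limbus backlight boost

print(sclera_dmfp(0.0))    # ~6.0 at the centre
print(sclera_dmfp(0.55))   # ~8.0 on the limbus ring
print(sclera_dmfp(1.0))    # ~1.3 at the outer edge
```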

The Sclera is driven by both displacement and bump. The bump drives the finer detailing and the displacement drives the larger-scale details such as the veins.

The Cornea shader is very simple but has a nice trick to emulate the thin film layer found on top of the Cornea.


I wanted the specular hits for the Cornea to be split into their respective channels to emulate the thin film look without using an iridescence shader on the entire Cornea, because the thin film layer on the Cornea is very small and its effects are barely visible.

To do this, you take three identical shaders and split the “Reflection Tint” into fully Red, fully Green, and fully Blue in the separate shaders. Each shader must also have a “Transmission Tint” value of 0.333. You then add these three shaders together with “LamaAdd” nodes to give you one Cornea shader with combined RGB values and a Transmission Tint value of 1.

To create and control the RGB split for the thin film look, we use the Roughness parameter. If you want the blue channel to split more than the others, increase the Roughness value of the blue Cornea shader; if you want the green to split more, increase it on the green Cornea shader.

For my setup, the order from most to least split is R > G > B, each offset in Roughness value by 0.01 from the next. This provided a very subtle look that I was happy with.
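Numerically, the trick works because the per-channel tints sum back to a neutral result. The base Roughness of 0.01 and the per-channel offsets below are my illustrative reading of the setup, not values from the project:

```python
# Three hypothetical dielectric lobes, one per channel, as described
# above: reflection tint isolated to R, G, or B, a transmission tint of
# 0.333 each, and roughness offset by 0.01 per channel (R > G > B).
lobes = [
    {"reflection_tint": (1, 0, 0), "transmission_tint": 0.333, "roughness": 0.03},
    {"reflection_tint": (0, 1, 0), "transmission_tint": 0.333, "roughness": 0.02},
    {"reflection_tint": (0, 0, 1), "transmission_tint": 0.333, "roughness": 0.01},
]

# "LamaAdd"-style combination: the summed lobes behave like one shader.
reflection = tuple(sum(ch) for ch in zip(*(l["reflection_tint"] for l in lobes)))
transmission = sum(l["transmission_tint"] for l in lobes)

print(reflection)              # (1, 1, 1): a neutral white specular overall
print(round(transmission, 3))  # 0.999: effectively a full transmission of 1
```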


Iris

The Iris shader is identical to the skin shader except that it has a Scale value of 3, meaning the DMFP is multiplied by 3 in the shader, increasing the SSS effect.

The Iris shader is fed using a bump and displacement, with the bump providing the higher frequency and the displacement providing the lower frequency.

Due to the super fine details in the Iris, it is important to use a finer dicing value for displacement. I opted for a value of 0.5, as opposed to the default of 1. This provides four times more geometric resolution for the displacement shader to work with.
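The "four times more resolution" figure follows from the dicing value controlling micropolygon edge length: halving it doubles the tessellation in each of the two surface directions.

```python
default_dicing = 1.0
iris_dicing = 0.5

# Micropolygon count per unit area scales with the inverse square of
# the dicing (micropolygon edge length) value: 2x finer in each of the
# two surface directions.
resolution_gain = (default_dicing / iris_dicing) ** 2
print(resolution_gain)  # 4.0
```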


Limbus

The Limbal ring is the dark ring found around the iris; it is the transition zone between the Iris and the Sclera. It too is a view-dependent effect, only showing up from certain angles and lighting conditions. The Limbus is made from a “LamaSSS” shader mixed with a “LamaDielectric” using a mask that dictates where the Limbus appears.

We use the “Path-traced Davis” mode for this as it doesn’t need the heavy compute of the “Path-traced exponential” mode to look accurate. The “Shadow Color” is tinted to a fleshy color to further help sell the effect of the Limbus backlighting mentioned before.

The “LamaDielectric” shader is set to have an IOR of 0.976 and a roughness of 0.01. This is because it sits directly behind the cornea and it needs an IOR value of below 1 so that we don’t get stacked refraction between the two panes of glass.

Without limbus on the left and with limbus on the right.


Meniscus

The Meniscus shading is super simple: a “LamaDielectric” with an IOR of 1.37 and a roughness of 0.09, with some bump provided by a texture map. You can also use procedural noises to generate this bump detail.


Lens

The Lens emulates the black part at the very centre of the eye, inside the Iris. In reality this would be a hole filled with fluid, with light interacting with the back of the eye, but that is very computationally expensive to emulate correctly, so we cheat by placing a piece of geometry there. The shading setup for it is a “LamaDielectric” with an IOR of 1.1 and a roughness of 0.3. The roughness is fairly high because light hitting the Lens has a very diffuse response.

It is important to remember that because the Lens sits inside the eye itself, we need to set the “Exterior IOR” correctly to 1.334 in the “LamaSurface” node that the “LamaDielectric” is attached to. This is normally set to 1, which assumes the object is surrounded by air.
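Snell’s law shows why the Exterior IOR matters. The incident angle below is arbitrary, but the point stands: a Lens of IOR 1.1 surrounded by fluid at 1.334 refracts very differently from one surrounded by air.

```python
import math

def refracted_angle_deg(n_outside, n_inside, incident_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n_outside * math.sin(math.radians(incident_deg)) / n_inside
    return math.degrees(math.asin(s))

# A ray hitting the Lens (IOR 1.1) at 30 degrees:
in_air   = refracted_angle_deg(1.0,   1.1, 30.0)  # ~27 deg, bends toward the normal
in_fluid = refracted_angle_deg(1.334, 1.1, 30.0)  # ~37 deg, bends away from the normal
print(in_air, in_fluid)
```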


Hair

The hair shading setups are simple, as my character has very dark hair and little variation that can’t be driven through the shaders. The shaders used are the new “LamaHairChiang” shaders, which provide more realistic hair rendering than the previous models.

I used a “PxrHairColor” to drive the look of the hair. I introduced variation in two ways: with the “Random Setup”, which takes the primID of the groom and uses it to apply variation, and with a painted map fed into the “Stray Density” parameter to drive the salt-and-peppering of the hair. I fed this map into a “PxrBlend” so I could blend two different stray-density values: higher at the painted salt-and-pepper parts and lower everywhere else, while never reaching 0.
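The stray-density blend can be sketched as a simple mask-driven mix. The low/high values here are hypothetical; the floor on the low end is what keeps the density from ever reaching 0:

```python
def stray_density(mask, low=0.05, high=0.6):
    """PxrBlend-style linear mix of two stray-density values driven by
    the painted salt-and-pepper map: high where the map is painted
    white, low (but never 0) everywhere else."""
    t = min(max(mask, 0.0), 1.0)
    return low + (high - low) * t

print(stray_density(0.0))  # 0.05: unpainted areas still get some strays
print(stray_density(1.0))  # ~0.6: fully painted salt-and-pepper regions
```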

The “LamaHairChiang” shader itself is simple too, making only slight changes to the default values to get the desired look.

The above setup was used for all of the hairs on the character. I made slight adjustments to each material depending on which hair I was targeting. For example, the beard is less rough than the hair and also has less strays and less melanin.

Doing this material level variation for the entire setup provided the desired look for the groom.

Look Development Tips

Katana Graph State Variables

I used Katana’s Graph State Variables (GSVs) to help me while creating the character, both in the materials and at the root level. They let me quickly change aspects of the project, namely which version of the model was fed into the render and which damage variants were being rendered. You can set up GSVs in Katana to toggle between almost anything; they are an indispensable tool.

Katana Interactive Render Filters

Katana IRFs are global overrides that you can apply to interactive renders while working. They can do a whole variety of things, almost anything. I mainly used three: one IRF to turn off Displacement, another to increase the quality of the renders (increasing sampling and resolution), and the last one to switch between cameras, for example between the shot camera and the floating LookDev camera.


Lighting

I keep the lighting for my look development stages simple and light, mainly using an HDRI. This keeps rendering performance as efficient as possible, allowing for faster iterations.

For the hero light rig used in the renders, I used an HDRI and three “PxrRectLight” lights: one for the Key, one for the Rim, and one for the Fill. Each had a texture attached to break up its light output and give some dynamic range.


Rendering and Denoising

The project was rendered with the “PxrPathtracer” and RIS. Since I used RenderMan 25, I had access to Pixar’s new AI denoiser technology, which I used on this project. It is a truly next-generation technology that can save countless hours of rendering, and it can now be used interactively with RenderMan 26.

There are some things it is better at cleaning up than others, however. For example, I noticed that at lower sampling it has a very hard time denoising eyes. This is to be expected, as the eyes are super complex and take a lot of samples to converge fully; with low samples, the denoiser is left to do a lot of guessing. I let the render cook for as long as was feasible for my sequence and then ran the denoiser on top to remove any remaining noise, which proved to be the best method for my use case.

Noisy vs Denoised render.

I used the checkpointing feature “Exit at” to control how long each frame was being rendered depending on the shot I was rendering. The average render time I was using for this character was 2 hours per frame.

For the full renders and a showcase of the finished asset, check out the video below:


About the Artist


Terms of Use

This project is available under an Attribution-NonCommercial 4.0 International License, which allows you to share and redistribute it for non-commercial purposes, as long as you credit the original author.

Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0)

