February 03, 2021
The Gallus-Gavialis is a predator living near swampy lakes. Its long nose is a useful tool for catching fish, yet it is incredibly fast on the ground thanks to its long legs. A mixture of crocodile and ostrich, it is often referred to by hunters as the Crocostrich!
The idea for the creature was to do some kind of mix between an ostrich/flamingo and a crocodile/gavial. After gathering inspirational references, the fun part begins ... Oh, and don't forget to follow along with the Maya project!
These are the steps we will go through:
For modeling in ZBrush, it’s good to start with bones. Many of them are from the IMM Dragon Bones Brush and some others are from older projects. This step is important because it allows us to visualize the overall proportions of the design, which helps us imagine how the creature will articulate and move.
When we’re happy with the bones, we can continue with the muscles. Starting from zSpheres and manipulating them with the Move Brush gives us a lot of flexibility during this exploratory phase. When we’re happy with the results, we can merge all the muscles over the bones using Dynamesh. From there, we only have to clean or reshape some areas, but the main volumes are settled.
Our next step is to create clean topology and UVs. Using ZRemesher for the topology works great for still frames, where matching an underlying rig is not important.
The UVs were created in Maya and divided into 4 UDIMs. It’s not uncommon to go with 6 or 8 UDIMs, but in this case the UV shells were easily packable while allowing for good texture resolution.
For creating displacements efficiently, we can pack the displacement detail into three channels, each containing a different level of detail. You can find more about this efficient workflow in the Making of Sinodon.
Once the UVs are finalized we can take the clean model back to ZBrush, where we can confidently subdivide the geometry as much as we need in order to start dealing with skin folds.
Mari is a fantastic tool for micro details: we can paint minute displacement detail there, then import the displacement maps onto a ZBrush layer with the help of Jake Harell’s UDIM import tool for ZBrush. We can then use ZBrush layers to reshape and clean some areas, making the detail more or less visible.
Depending on your system resources, ZBrush will struggle at some point, so always keep an eye on memory usage; in our case this limit is around 50 million polys. Also, keep in mind that ZBrush quadruples the geometry density with each subdivision, so we need to make sure we build a base mesh that can reach the necessary detail without exhausting system resources.
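As a back-of-the-envelope check (plain Python, not a ZBrush script), the quadrupling rule makes it easy to see how many subdivision levels a given base mesh can afford before hitting a polygon budget:

```python
def faces_at_level(base_faces: int, level: int) -> int:
    """Each ZBrush subdivision step quadruples the face count."""
    return base_faces * 4 ** level

# Planning base mesh density against a ~50M-poly budget (illustrative numbers):
for base in (20_000, 50_000, 100_000):
    levels = 0
    while faces_at_level(base, levels + 1) <= 50_000_000:
        levels += 1
    print(f"{base:>7} base faces -> {levels} subdivisions "
          f"({faces_at_level(base, levels):,} faces)")
```

A 50k-face base mesh, for example, overshoots a 50M budget at the fifth subdivision (50,000 × 4⁵ = 51.2M), so it only gets four usable levels.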
Before going into Mari, let’s look at some useful tips for exporting texture maps out of ZBrush. This will help us further down the texturing process.
This texture map is great for broad details, skin folds, scales, etc. It serves as the base for the rest of our texturing work.
This texture map gives us great detail in crevices and other small features. You can also export ambient occlusion, but you will have to lower your subdivision level to around 1 million polys to make sure the export process is not too lengthy.
The last thing to do before moving to Mari is to export a decimated version of the mesh; we’ve settled at around 1 million polygons. By doing this we still keep a good amount of detail on our model without having to use a displacement map in Mari. This method is more efficient, as it does not slow down the software while texturing.
In Mari, the first thing we should do is create a new channel called “Utilities.” In this channel we will import the displacement and cavity maps which will help us drive the texture details later on.
While texturing, it’s best to work with the node graph. It can be a bit tricky when you're not used to it, but it is more flexible and efficient than layers. Using nodes is also a good way to keep things organized, as it makes the flow of the texturing network easier to visualize.
Using marble or dirt textures in overlay or multiply mode while creating the base color of the creature is a great way to start; this will give us some subtle color variations. Naturally, we have to set these variation layers to low opacity values.
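The standard multiply and overlay formulas explain why these modes suit subtle variation: multiply only darkens, while overlay pushes values away from mid-grey in both directions. A generic sketch of the per-channel math on 0–1 values (not Mari code):

```python
def multiply_blend(base: float, blend: float, opacity: float = 1.0) -> float:
    """Multiply blend: always darkens, so it reads as dirt or grime."""
    out = base * blend
    return base + (out - base) * opacity  # low opacity keeps the shift subtle

def overlay_blend(base: float, blend: float, opacity: float = 1.0) -> float:
    """Overlay blend: darkens below mid-grey, brightens above it."""
    if base < 0.5:
        out = 2.0 * base * blend
    else:
        out = 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)
    return base + (out - base) * opacity

# A dirt texel nudging a skin tone at low opacity:
print(round(multiply_blend(0.6, 0.45, opacity=0.15), 4))
```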
When we’re done with the base color we can start to drive the detailing by using our “Utilities” maps as masks.
Once we're happy with the color, we can move on to the specular roughness and color maps. They are mostly driven by the displacement and cavity maps to add contrast. Crevices always trap dirt and grime, so these should be less shiny, or rougher, than the rest of the body.
When texturing a model it’s also important to start shading at the same time. We can start shading in RenderMan once we have a base color going. It’s a back-and-forth process between Mari and Maya, which helps us avoid spending too much time on things that won’t be needed or seen in the final render.
A great way to start shading is by plugging displacements together with the PxrDispScalarLayer. We can plug the ZBrush displacement into the base layer to get the overall shape, then split the Mari displacement into three channels (R, G, B) and plug the Red and Green channels into two separate displacement layers. We need to make sure we remap the texture to “center”, which makes 0.5 the value without displacement: anything below 0.5 displaces in and anything above displaces out. We can keep the Blue channel to add fine detail through bump mapping.
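The “center” remap and the layer stack boil down to simple arithmetic: each channel becomes a signed offset around 0.5, scaled by its own gain, and the layers sum. A plain-Python sketch of that math (function and parameter names are illustrative, not the actual PxrDispScalarLayer API):

```python
def centered(value: float, gain: float) -> float:
    """'Center' remap: 0.5 is neutral; below displaces in, above displaces out."""
    return (value - 0.5) * gain

def layered_displacement(base_disp: float, wrinkles_r: float, scales_g: float,
                         base_gain: float = 1.0,
                         wrinkle_gain: float = 1.0,
                         scale_gain: float = 1.0) -> float:
    """Sum of a base layer plus two centered channel layers, mirroring
    a displacement-layer stack where each layer has its own gain."""
    return (centered(base_disp, base_gain)
            + centered(wrinkles_r, wrinkle_gain)
            + centered(scales_g, scale_gain))
```

With this structure, raising `wrinkle_gain` deepens the Red-channel wrinkles without touching the Green-channel scales, which is exactly the per-layer control described next.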
Once we have everything connected, we can easily control the intensity of each displacement. This is very helpful to control skin wrinkles without affecting the intensity of the scales for example.
Once we’re happy with the displacement, we can move on to the rest of our shading.
The color texture is remapped for added contrast but it’s good texturing practice to avoid doing too many drastic changes in Maya. If there are big aesthetic or color changes to our character, it’s best to go back to Mari.
It can be difficult to get a precise value in Mari without shading context, so a good trick is to do a simple grey texture which we can tweak with PxrRamp or PxrRemap in RfM.
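One way to picture the trick: the grey texture is just a 0–1 value that a remap node maps into whatever final range we want, so the look can be tuned in RfM without re-exporting from Mari. A generic sketch of a linear remap (not the PxrRemap API):

```python
def remap(value: float, out_min: float, out_max: float) -> float:
    """Map a 0-1 grey texture value into a target range."""
    return out_min + value * (out_max - out_min)

# The same painted mid-grey can become a tight, tweakable roughness range:
print(remap(0.5, 0.2, 0.6))
```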
This is similar to our specular roughness, but inverted, as we wanted to accentuate the scale pattern: the color of the crevices between the scales is duller than the rest of the scale surface.
For SSS we’ve created separate masks in Mari, each defining a zone of interest for subsurface. The idea is to split the RGB channels in Maya and use them to affect the intensity over a given area, for example, we can make the subsurface stronger on the mouth without affecting the legs or the scales. This is a great trick to efficiently control separate areas of our character without multiple texture maps.
Working with masks is a very good way to avoid too much back and forth between RenderMan and your texturing software during shading exploration.
Our subsurface masks also allow us to manipulate the weight of the subsurface vs the diffuse. Our creature is a mixture of soft tissue and hard scales, so we’re inverting the subsurface mask and using it to drive the diffuse gain. With this, any soft tissue gets increased subsurface scattering and areas with more scales get a stronger diffuse value.
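A small sketch of the idea in plain Python (channel-to-zone assignments and function names are made up for illustration):

```python
def region_gain(mask_rgb, mouth: float = 1.0, legs: float = 1.0,
                scales: float = 1.0) -> float:
    """Weight the per-channel zone masks into one subsurface gain.
    Each RGB channel isolates a region, so one texture drives three areas."""
    r, g, b = mask_rgb
    return r * mouth + g * legs + b * scales

def sss_vs_diffuse(sss_mask: float, sss_gain: float = 1.0,
                   diffuse_gain: float = 1.0):
    """Inverting the subsurface mask to drive diffuse: soft tissue
    (mask near 1) scatters, scaly areas (mask near 0) stay diffuse."""
    return sss_mask * sss_gain, (1.0 - sss_mask) * diffuse_gain
```

Boosting `mouth` then strengthens subsurface only where the Red channel is painted, leaving legs and scales untouched, while `sss_vs_diffuse` keeps the two lobes complementary so the overall energy balance of the skin stays stable.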
If you want to add contrast or fine detail in the soft tissue you can blend your RGB masks with the EdgeColor or the SpecRoughness texture, as it follows the same pattern.
A great way to evaluate values and silhouettes in lighting is to light with a gray material on a displaced surface. This allows us to quickly see the aesthetic impact of the lights … and it’s really fast! Which is always nice when iterating through creative choices.
When we’re happy with the placement and intensity of our lights we can turn on the textures to see how it’s all coming together. Adjusting from here is much more straightforward.
Our lighting breaks down something like this...
It’s sometimes convenient to turn on “Normalize” in the light, which allows us to scale the light without affecting its intensity.
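A rough mental model of what Normalize does (a simplification, not RenderMan’s exact math): the light’s gain is treated as total output, so the per-area emission is divided by the light’s area and the overall exposure stays roughly constant as the light scales:

```python
def light_output_per_area(gain: float, area: float, normalize: bool) -> float:
    """With Normalize on, gain acts like total power: a bigger light
    emits less per unit area, so scaling it doesn't change exposure."""
    return gain / area if normalize else gain
```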
Here we are: the shading and the lighting are good to go, so let’s render this beastly creature!
For rendering we can use a Holdout plane as a ground. This allows us to get the shadows of the creature on the ground without any background in the alpha.
When using Holdouts, it’s best to uncheck the Learn Light Results in the render settings section. This avoids any fringing artifacts in the holdout matte.
When trying to find good sampling values it can be useful to uncheck Incremental Rendering, which allows us to see the final noise level on the first rendered zones and then adjust the sampling accordingly. Combining this with a render region can be very useful.
Once our image is rendered we can add a background and do some color grading in Nuke.
Decreasing the MIP bias of your PxrTexture to -1 will bring back some small details in your texture. Combining this with a Micropolygon Length of 0.5 will add even more detail at the cost of longer render times.
Victor Besseau is a CG Supervisor at SUPAMONKS STUDIOS and a Lighting/Rendering teacher at ESMA Nantes.