Artwork by Matteo LLorens. Razorback Whiptail model by Kurtis Dawe.


Creating the Razorback Whiptail



Ssssssssshaders!


For some time now, I've been eager to learn how to use Mari software. When Kurtis Dawe offered several artists the opportunity to work on one of his assets, I immediately seized the chance. Being passionate about creatures and monsters since my introduction to 3D work, I enthusiastically immersed myself in this exciting project. Access to such a high-quality asset enabled me to focus on practicing the main areas I wanted to refine: creature texturing and look-dev.


Modeling

A crucial aspect for all artists is having a diverse range of references. I highly recommend PureRef, which is exceptionally convenient for organizing references that will be valuable throughout the project.

Begin with reality as your foundation, but don't hesitate to explore various projects completed by other artists. This approach allows you to ground your work in reality and then innovate to adapt it to your own model. While making modeling decisions, keep in mind that these choices also influence texturing. Each decision goes beyond mere aesthetics; it needs to be justified and should consider the environment in which the creature resides. Is it on a particular planet? Does it resemble an animal, reptile, or amphibian? Such considerations will contribute to creating the most realistic and convincing creature possible.

Inspiration and sources of the project


Modeling Tip

For the modeling aspect, I recommend exploring the YouTube videos shared by Kurtis Dawe. These videos provide valuable insights into his working methods, demonstrating how he approaches creature creation from scratch. He showcases his techniques for handling limb transitions, sculpting scales, and breathing life into his assets.

Moving on to the UV stage, it's essential to maintain symmetry to facilitate texturing in Mari. This allows for seamless transfer of textures, UDIM by UDIM, from one side of the model to the other. Ensure uniform texel density across the body's UDIMs to maintain consistency. However, consider increasing the texel density for areas like the eyes to achieve higher quality in the final render, especially as they are the focal point of the creature.
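To make the texel-density check concrete, here is a minimal Python sketch of the underlying math. The per-UDIM area figures are hypothetical placeholders; in practice you would query UV and world-space areas from your DCC.

```python
import math

def texel_density(uv_area, world_area, map_resolution=4096):
    """Approximate texel density (pixels per scene unit) for a group of faces.

    uv_area:        summed UV-space area of the faces (in 0..1 UV units squared)
    world_area:     summed world-space area of the same faces
    map_resolution: square texture resolution assigned to the UDIM
    """
    # Linear density: pixels covered divided by real-world size (hence the square roots).
    return map_resolution * math.sqrt(uv_area) / math.sqrt(world_area)

# Hypothetical per-UDIM measurements, gathered from whichever DCC you use.
udims = {
    "1001_body": {"uv_area": 0.82, "world_area": 210.0},
    "1002_tail": {"uv_area": 0.78, "world_area": 205.0},
    "1011_eyes": {"uv_area": 0.60, "world_area": 6.5},   # deliberately denser
}

for name, m in udims.items():
    print(f"{name}: {texel_density(m['uv_area'], m['world_area']):.1f} px/unit")
```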

Modeling by Kurtis Dawe in ZBrush


Texturing in Mari

The methodology involves using Substance Painter to bake utility masks, which then drive the various maps in mask form. Once all the necessary maps are baked, they are imported into a Mari template, along with the displacement and normal maps exported from ZBrush.

Each mask is distributed through Teleport receivers, which is crucial for keeping the number of paint nodes and masks in the scene down: a mask used for color can also drive displacement or roughness, offering flexibility and efficiency in the workflow.

TIP

A Mari archive can also be downloaded. Just make sure that your version of Mari is at least 6.0v2.


Mari node graph, derived from Loucas Rongeart’s template


The next step involves creating ISO maps, which serve as masks in the Mari tree. Once all ISO and bake maps have been generated and painted, there is freedom to experiment, modify, and blend the maps to achieve the desired outcome.

Each mask introduces variations in color, hue, and saturation, contributing to a natural and realistic appearance. The asset's hue/saturation/value (HSV) values are adjusted to ensure authenticity. Additionally, each scale exhibits unique characteristics, with factors such as the creature's posture influencing wear and tear. For instance, the creature's belly may show more damage due to frequent contact with surfaces, while its tail and spikes may exhibit wear from combat or environmental interactions.
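As a rough illustration of this per-scale variation, the following Python sketch (pure illustration, not Mari's node graph) derives a small, repeatable hue/saturation/value offset from each scale's ID around an assumed base color.

```python
import colorsys
import random

BASE_RGB = (0.38, 0.30, 0.22)  # assumed base scale albedo, placeholder values

def scale_color(scale_id, hue_jitter=0.02, sat_jitter=0.10, val_jitter=0.10):
    """Return a slightly varied RGB color for one scale.

    The random generator is seeded with the scale ID so the variation is
    stable between runs, like an ID-driven variation in a texturing package.
    """
    rng = random.Random(scale_id)
    h, s, v = colorsys.rgb_to_hsv(*BASE_RGB)
    h = (h + rng.uniform(-hue_jitter, hue_jitter)) % 1.0
    s = min(max(s + rng.uniform(-sat_jitter, sat_jitter), 0.0), 1.0)
    v = min(max(v + rng.uniform(-val_jitter, val_jitter), 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

for sid in range(5):
    print(sid, tuple(round(c, 3) for c in scale_color(sid)))
```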

ISO mask used in Mari to drive my COL/DSP/RGH/SPC projections


When working on shader previews, it's advisable to restrict the preview to a single channel at a time, such as displacement or albedo, to minimize distractions. This approach helps maintain clarity and prevents potential confusion that could affect decision-making. Nonetheless, periodic checks of the full shader ensure that everything remains in order.

Using Teleport receivers and bake points streamlines the workflow and keeps the scene light, ensuring a responsive viewport. This optimization minimizes time spent on selections, facilitating efficient progress.

Once all maps and ISOs are synchronized, duplicating the left-hand side of the model onto the right-hand side using UV symmetry, as outlined in the modeling section, is recommended. Subsequently, introducing procedural asymmetry through triplanar projection of rock, noise, and oil maps enhances variation across ISO, color, and bake maps, fostering a more organic appearance.

Creating masks in Substance Painter for generating more variation: DrySkin and scale variations.


Procedural Texturing

To introduce maximum variation, it's essential to combine the ISO maps and displacement/utility maps with rock and granite maps in triplanar projection. This approach adds significant detail and facilitates smooth transitions between the layer levels created within the Mari node graph.
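For readers unfamiliar with triplanar projection, this minimal NumPy sketch shows the idea behind it: the map is projected along the three axes and the results are blended by the surface normal. The `sample2d` lookup and the sample values are stand-ins, not Mari code.

```python
import numpy as np

def triplanar(sample2d, position, normal, blend_sharpness=4.0):
    """Blend three planar projections of a tiling texture by the surface normal.

    sample2d(u, v) -> value: any 2D texture lookup (a rock or granite map here)
    position:  world-space point (x, y, z)
    normal:    unit surface normal at that point
    """
    x, y, z = position
    nx, ny, nz = np.abs(normal) ** blend_sharpness
    wsum = nx + ny + nz
    wx, wy, wz = nx / wsum, ny / wsum, nz / wsum

    # Project along each axis: the other two coordinates become the UVs.
    return (wx * sample2d(y, z) +
            wy * sample2d(x, z) +
            wz * sample2d(x, y))

# Example with a procedural, noise-like lookup standing in for a scanned rock map.
tex = lambda u, v: 0.5 + 0.5 * np.sin(12.9898 * u + 78.233 * v)
print(triplanar(tex, position=(0.3, 1.2, -0.7), normal=(0.1, 0.9, 0.42)))
```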


Mari texturing workflow: blending painted ISO masks with projected rock textures


Following the procedural pass, which combines scanned map projection and procedural nodes, a final texturing pass is conducted to incorporate a myriad of micro-detail variations. Substance Designer and Substance Painter are utilized to modulate HSV variations on the scales, as well as to introduce subtle details of dead skin, strategically placed at specific ends of the scales. Throughout this process, considerations are made regarding the creature's movement and environment, referencing texturing materials to inform decisions.


UV previz in Mari with the albedo map


The final addition involves enhancing certain specular highlights along the edges of the scales, while simultaneously augmenting detail in the albedo. Once the mask creation is complete, the focus shifts back to Mari for texture publishing, marking the transition to the Lookdev stage.


Mari texturing workflow: compilation of the node graph sections in Mari


Shading

For the Lookdev scene, prioritize neutral lighting, adhering to classic Lookdev principles with chrome balls/grey balls and a Macbeth chart. Implement a key light, a backlight, and an HDRI from Polyhaven to serve as a fill light and establish the boundaries of a studio environment. While maintaining neutrality, feel free to infuse a personal touch into your Lookdev scene. This not only sets the stage for shading your assets but also enhances the presentation, making it more compelling for potential professional studios or clients.


RenderMan Lookdev scene viewport with the lighting setup


Regularly switch light rigs during the shading process to ensure the asset remains as neutral as possible, adaptable to various environments and lighting conditions. This approach minimizes the need for extensive shader adjustments for each shot. Utilize HDRI environments such as full sun and a diffuse gray sky to fine-tune your shaders effectively.

TIP

Create several tabs in which you can store all your shaders and move around the Hypershade more quickly.


Shading specular setup


Shading Tips

The first crucial step in Lookdev involves setting up the displacement output from ZBrush to ensure minimal loss of detail frequency between ZBrush and RenderMan. A meticulous displacement setup is employed, which categorizes the levels of detail per map into primary, secondary, and tertiary.

Each map is exported with an average gray value of 0.5 and is then re-centered in the PxrDispTransform node in RenderMan to prevent model distortion. This ensures that a value of 0.5 produces no displacement.

The PxrDispScalarLayer node enables precise control over each map, allowing adjustments according to the intensity of the PxrDisplace. Additionally, using a PxrBump with a minimal value to double up the displacement maps as bump helps recover some of the high-frequency volume lost during map export.
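Conceptually, the displacement stack boils down to re-centering each map around mid-gray and summing the layers with per-layer gains. The NumPy sketch below illustrates that math only; it is not the PxrDispTransform/PxrDispScalarLayer setup itself, and the gain values are placeholders.

```python
import numpy as np

MID_GRAY = 0.5  # exported maps are centered on 0.5 = "no displacement"

def remap(disp_map):
    """Shift a 0..1 displacement map so 0.5 maps to zero offset."""
    return disp_map - MID_GRAY

def layered_displacement(primary, secondary, tertiary, gains=(1.0, 0.35, 0.1)):
    """Combine primary/secondary/tertiary detail with per-layer gains,
    the way a scalar-layer displacement setup would (placeholder gains)."""
    g1, g2, g3 = gains
    return g1 * remap(primary) + g2 * remap(secondary) + g3 * remap(tertiary)

# Tiny check: flat mid-gray maps produce zero displacement, as expected.
flat = np.full((4, 4), 0.5)
print(layered_displacement(flat, flat, flat).max())  # -> 0.0
```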

The primary displacement map, exported from subdivision level 3 of Kurtis's ZBrush model, retains the essential baked topology information needed to accurately transcribe all the displacement data. The secondary map, exported from Mari, reflects changes in volume based on color variations. Lastly, the tertiary map is generated from a mask created in Substance Painter and then brought into Nuke for blending and the creation of additional volume variations. This process involves enhancing the map with a high-pass filter and using expression nodes to introduce small details such as dead skin and minor breaks.
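The high-pass step amounts to subtracting a blurred copy of the map from itself and re-centering the result on mid-gray. Here is a small SciPy/NumPy sketch of that idea (not the actual Nuke script); the blur size and the synthetic test mask are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass(detail_map, blur_sigma=8.0, mid_gray=0.5):
    """Keep only the high-frequency part of a mask and re-center it on 0.5,
    so it can be layered into the tertiary displacement slot."""
    low_freq = gaussian_filter(detail_map, sigma=blur_sigma)
    return (detail_map - low_freq) + mid_gray

# Example: a synthetic "dead skin" mask made of a broad gradient plus fine speckle.
h, w = 64, 64
gradient = np.linspace(0.2, 0.8, w)[None, :].repeat(h, axis=0)
speckle = 0.05 * np.random.default_rng(0).standard_normal((h, w))
tertiary = highpass(gradient + speckle)
print(tertiary.mean())  # stays close to 0.5, as the displacement setup expects
```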


Shading displacement setup


In the second phase, when working with specular components, employing three distinct lobes maximizes the shader's capabilities.

The first lobe, Primary Specular, is governed by a map that regulates the specularity in specific regions, such as the paws or the curvature of the scales. This same lobe is also influenced by a scalar roughness map, determining the extent of specular roughness across the scales.

The second lobe, Rough Specular, contributes to an overall specular effect across the entire model, exhibiting a significantly rougher appearance than the first lobe. This helps soften the spread of specular highlights over scales and spikes.

Finally, the Clearcoat lobe adds extra specular highlights on the mucous membranes inside the mouth. This lobe is controlled by an ISO mask map, restricting it to designated areas where additional specular highlights are desired.
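Put together, the three lobes behave roughly like the masked sum sketched below. This is only a conceptual Python illustration of which map gates which lobe; the gains and sample values are placeholders, not the PxrSurface settings used on the asset.

```python
def combined_specular(primary, rough, clearcoat,
                      primary_mask, clearcoat_iso,
                      gains=(1.0, 0.3, 0.5)):
    """Conceptual sum of the three specular lobes at one shading point.

    primary / rough / clearcoat: the evaluated lobe responses
    primary_mask:  map restricting the tight specular (paws, scale curvature)
    clearcoat_iso: ISO mask limiting the clearcoat to the mouth membranes
    The scalar roughness map is not shown here because it feeds the lobe
    evaluation itself rather than this final mix.
    """
    g1, g2, g3 = gains
    return g1 * primary_mask * primary + g2 * rough + g3 * clearcoat_iso * clearcoat

# Clearcoat only contributes where the ISO mask is non-zero (inside the mouth).
print(combined_specular(0.8, 0.2, 0.6, primary_mask=1.0, clearcoat_iso=0.0))
print(combined_specular(0.8, 0.2, 0.6, primary_mask=0.2, clearcoat_iso=1.0))
```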


Shading specular setup


Once the displacement and specular components are properly configured on your assets, the final step involves refining the albedo to determine the color and subsurface characteristics of the scales. Varying the gain ensures that the sum of color, subsurface, and single scatter approximates 1. For the mucous membranes, accentuating the viscous aspect within the mouth is achieved using the single scatter lobe.
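A quick way to keep that rule of thumb in check is to rescale the three gains so they sum to 1, as in this small Python sketch; the example values are placeholders, not the gains used on the Razorback.

```python
def normalize_gains(diffuse_gain, subsurface_gain, single_scatter_gain):
    """Rescale the three gains so their sum is 1, keeping their relative balance.
    This mirrors the rule of thumb above: color + subsurface + single scatter
    should roughly account for all the incoming energy, not more."""
    total = diffuse_gain + subsurface_gain + single_scatter_gain
    if total <= 0.0:
        return 0.0, 0.0, 0.0
    return (diffuse_gain / total,
            subsurface_gain / total,
            single_scatter_gain / total)

# Example: scales lean on diffuse, mucous membranes lean on single scatter.
print(normalize_gains(0.7, 0.4, 0.1))   # -> roughly (0.58, 0.33, 0.08)
print(normalize_gains(0.2, 0.3, 0.6))   # -> roughly (0.18, 0.27, 0.55)
```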

Maintaining distinct parameters for scales and mucous membranes allows for maximum control over the shader, minimizing the need to switch back and forth between Mari and RenderMan. Leveraging the available parameters on the PxrSurface shader is essential, keeping in mind that parameters are constrained by the gain. For instance, adjusting the mean free path distance in the subsurface requires adapting the shader parameters to the gains on each lobe.

Shading color setup


After completing these steps, thoroughly test the reliability of your shaders under various lighting conditions before proceeding to shot and environment settings. This ensures the shaders perform consistently across different scenarios, guaranteeing optimal visual quality in your final renders.

Vignettes of the Razorback Whiptail


A higher resolution render at this point doesn't hurt either, as it helps us check for any texturing inconsistencies under ideal conditions.


Lighting

Due to time constraints preventing a full CG shoot, I've chosen to integrate my Razorback into various environments as an alternative approach. This method offers valuable insights into asset compatibility with different settings and lighting conditions. To source diverse environments, I extensively utilized Shotdeck, a platform providing access to film references. Countless hours were spent meticulously selecting the most compelling references, with many sourced from the 2019 Disney film, The Lion King. These references serve as excellent benchmarks for aligning my creation with established film sets.

Integrating my creature into these environments serves as an invaluable exercise in lighting, enabling me to refine my skills in image analysis. This process involves analyzing light sources, their quantity, intensity, exposure, temperature, and more. Each integration serves as a lighting workout, offering opportunities for growth and development in understanding and manipulating light within a scene.



Lighting Tips

Engaging in a light-matching exercise like this is a highly instructive way to refine lighting skills. Through careful analysis of the light sources, their intensity, and how they interact with the asset, you sharpen your visual acuity. Each question posed during this process contributes to a deeper understanding of reference analysis and lighting principles.

Upon completion of the lighting match, leveraging AOVs (Arbitrary Output Variables) provides maximum flexibility in compositing. These additional render passes offer precise control over individual elements, allowing for fine-tuning and refinement in post-production. By harnessing the power of AOVs, artists can achieve greater artistic control and produce polished, professional-quality results.

Put an image in the viewfinder of your camera to match the composition of the reference as closely as possible


Rendering

When setting up render layers and AOVs, it's essential to output one Light Path Expression (LPE) per main light. Alternatively, lights with similar functions can be grouped together, such as rim lights, although excessive grouping may limit flexibility in compositing. Additionally, separating the diffuse from the specular passes enables fine-tuning and correction of any discrepancies between shading and lighting.

In an ideal scenario where everything is perfectly calibrated, there may be no need to separate AOVs in this manner. However, in cases where a light's intensity is too strong, leading to an exaggerated specular effect, separating AOVs becomes invaluable. This allows for targeted adjustments to mitigate issues and achieve desired visual outcomes during post-production.
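For reference, per-light LPEs typically look like the patterns below, collected here in a plain Python dictionary as you might keep them in a pipeline script. The light group names are assumptions, the strings cover direct lighting only, and you should double-check the exact syntax against the RenderMan documentation.

```python
# Assumed light group names: "key", "rim", "fill" (adapt to your rig).
# Diffuse and specular are split so they can be balanced separately in comp.
aov_lpes = {
    "key_diffuse":   "color lpe:C<RD>[<L.'key'>O]",
    "key_specular":  "color lpe:C<RS>[<L.'key'>O]",
    "rim_diffuse":   "color lpe:C<RD>[<L.'rim'>O]",
    "rim_specular":  "color lpe:C<RS>[<L.'rim'>O]",
    "fill_diffuse":  "color lpe:C<RD>[<L.'fill'>O]",
    "fill_specular": "color lpe:C<RS>[<L.'fill'>O]",
}

for name, lpe in aov_lpes.items():
    print(f"{name}: {lpe}")
```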


Image sampling parameters and AOVs (Arbitrary Output Variables)


Rendering Tips

To streamline the denoising process and facilitate mask creation, it's advisable to separate the beauty pass from the Light Path Expressions (LPEs) and utilities. This separation ensures that denoising each AOV individually becomes more manageable. Additionally, extracting a cryptomatte by material or object enables rapid mask creation, even if assets are in motion, as the masks will remain consistent.

With the beauty pass separated from the LPEs and utilities, denoising becomes more efficient, as each component can be processed independently. Moreover, using Cryptomatte for mask creation offers unparalleled flexibility, allowing for precise control over various elements within the scene, regardless of asset movement. This approach enhances the overall post-production workflow, facilitating smoother and more precise final renders.


Set up the Cryptomatte by object to create masks for compositing


Compositing

For compositing, consider an additive reconstruction approach, using Shuffle nodes to rebuild your entire set of AOVs and restore your final "beauty" image.
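As a sketch of that reconstruction, the following Nuke Python snippet shuffles a few AOV layers out of a multichannel read and sums them back with plus merges. The layer names and file path are hypothetical; adapt them to your own AOV naming.

```python
import nuke

# Assumed LPE layer names inside the multichannel EXR; rename to match yours.
AOV_LAYERS = ["key_diffuse", "key_specular", "rim_specular", "fill_diffuse"]

# Hypothetical path to the rendered sequence.
read = nuke.nodes.Read(file="/path/to/razorback_beauty.%04d.exr")

merged = None
for layer in AOV_LAYERS:
    # Shuffle each AOV layer into the rgba channels so it can be summed.
    shuffle = nuke.nodes.Shuffle(inputs=[read], label=layer)
    shuffle["in"].setValue(layer)

    if merged is None:
        merged = shuffle
    else:
        # Additive reconstruction: beauty ~= sum of the light/lobe AOVs.
        merged = nuke.nodes.Merge2(inputs=[merged, shuffle], operation="plus")
```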

Once the reconstruction is complete, you can enhance the light and color intensity to closely match the references using utility nodes. Additionally, ensure that defocus and motion blur align with the references, if present.

Conclude the compositing process with a final color correction pass using glow, vignetting, and grain to seamlessly integrate the asset into the shot environment. Custom-built nodes or those sourced from Nukepedia can provide access to technical tools for precise adjustments, ensuring accuracy in the final result.

Additive compositing template


Compositing Tips

To closely match your references, adjust the gain and gamma at both ends of the range to align the grain and the black and white points. Exploring extreme values can provide additional visual and technical insight to fine-tune the image and enhance the integration.

It's important to remember that you're matching a reference with different assets, which may result in variations in shaders, specular highlights, and contrast. For instance, scales will reflect light differently than hair due to their inherent properties. Therefore, it's essential to adapt your adjustments accordingly to achieve cohesive visual consistency while accommodating the unique characteristics of each asset.

Comparative gamma and gain correction to match the last details of the image to the reference


Utilize the node switch and the W key to facilitate an effective comparison between your reference and your rendering. Examine your reference closely, zooming in and out as needed, and test various screens with different colorimetry and calibration, as these factors can influence perception.

By toggling between the reference and your rendering, you can identify areas where adjustments are needed to achieve closer alignment. Pay attention to details and nuances, ensuring that your rendering faithfully captures the essence and characteristics of the reference image. This meticulous approach enhances the accuracy and quality of your final output.

Comparison of reference and final image after compositing


To finish up, here are some renders matching film plates. What are you waiting for? Download the project and have fun with the Razorback Whiptail look-dev project!

Matching footage with our Razorback Whiptail



About the Artists


Terms of Use

This project is available under an Attribution-NonCommercial 4.0 International License. This allows you to share and redistribute it for non-commercial purposes, as long as you credit the original authors.




Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0)

