Artwork by Romane La Rosa. Concept art by Raluca Losifescu.


Creating Baba Yaga

Created and written by Romane La Rosa. Edited by Leif Pedersen.

From Myth to Rendering Folklore

Silence. Forest. Snow-covered trees. Wooden cottage. Fire… Those words usually conjure images of a peaceful chalet in the Alps in the heart of winter. But what if we added distant wolf howls and towering, dark, creaking trees, with a cottage perched on chicken legs (yes, yes, weird, I know)? Let’s add hundreds of fiery human skulls on spikes around the cottage, turning the night into day…

That got less peaceful, didn’t it? Allow me to introduce you to the world of Baba Yaga, the witch of Slavic folklore.

In the tales, her character is a double-edged sword: she provides life-saving help to the protagonist in exchange for something. If the hero fails in the transaction, the witch devours them alive! Totally badass.

Baba Yaga was a terrifying figure of my childhood, and I thought it was time to pay homage to her. Having found an illustration that faithfully captures my imagination, by artist Raluca Losifescu, I did my best to make her both repulsive and alluring.

Her glazed eyes seem to impossibly fixate on us, while her walking posture threatens to reach us with a smile, as she wraps herself in enchanting yet filthy garments. She probably has a plan for us, and who knows, maybe some wisdom.


I drew the character's body over the original drawing and created a profile view. At the same time, I gathered a library of images of elderly women's bodies.

Original concept art on the right, by Raluca Losifescu


I assembled primitives like cubes and spheres directly in ZBrush for a quick body blockout, then merged a sphere for the skull with a cube for the jaw using the DynaMesh function to rough in the face. Since the body would be covered in clothing, I blocked out the position of the arms and then deleted them.

I always start at a very low subdivision level to force myself not to get into the details right away and to focus on the overall shape. I mostly used the Move brush to avoid bulges from too many brush strokes. For more precision, the Polish function of DynaMesh can be activated, and the Smooth brush can be set to Smooth Stronger (loaded via Open File → Brush → Smooth → Smooth Stronger) to smooth out the shapes.

Adding Details

Once satisfied with the overall shape, we can start adding details like the eyes, nostrils, and mouth by adding more resolution in DynaMesh. It’s important to observe the mesh from multiple viewpoints to avoid shapes that are too straight (especially in the top view for the head, as we tend to place the eyes on a straight line instead of a curve).

For the mouth, I created a cavity and then sculpted the lips around it, emphasizing their recessed appearance over the teeth. The gum is a separate mesh to maintain control over its visibility later on. To sculpt a single part, I used the shortcut shift + click and drag.

ZBrush modeling with DynaMesh

Once the model was complete, I exported it as an OBJ file, unwrapped the UVs in Maya, and reimported it into ZBrush. The purpose of this round trip is to sculpt micro-details and bake them into maps for projection, avoiding models with too many polygons. In this case, my model is subdivided more than necessary, but since my project is a single image, it doesn't pose a real problem. I would have approached it differently if the character were intended for animation or for import into a heavy scene with existing assets.

I subdivided the UV'd model three times in ZBrush and used HD Geometry. To use it, I pressed H with the mouse cursor at the desired sculpting location. The computer only displays the part we're working on and doesn't need to calculate the entire model, which allows us to sculpt in very high definition. Once the micro-details are added, such as wrinkles, folds, pores, and warts for Baba Yaga, we export the maps.

ZBrush HD Geometry can add detail efficiently.

Let's Get Serious

As for the clothing, I modeled one-sided meshes in Maya, then extruded and sculpted them in ZBrush, paying close attention to the retopology. Then I unwrapped the UVs in Maya again.

Ctrl + W → Mask by Feature → Border → Grow Mask → Sharpen → Duplicate the Subtool → ZRemesher (keep group, 1000, 30 adaptive size for more quads and simplification) → ZRemesh (repeat if necessary) → ZModeler and Extrude All.

ZBrush vest


Make sure you have ACES enabled in RenderMan for Maya. This workflow should be standard and automatic in the latest versions. 

I textured everything in Substance 3D Painter. For the skin, I gathered my modeling references and added some additional ones as needed (cheers to the rotten teeth). Since the original drawing had very little detail for the clothing texture, I took some liberties and experimented with different textures throughout the process.

Baking Maps

The first thing I did was bake the curvature, thickness, and ambient occlusion maps using the displacement and normal maps exported from ZBrush.


For the skin, I knew I was going to use subsurface scattering. So, I painted a flesh map that represents the layers of muscle, fat, and veins beneath the skin. I did this by blending two noise textures in deep crimson tones, and added some quick veins on the temples and hands.

For the skin itself, I painted it by adding a fill layer and a black mask to work non-destructively, allowing me to change the skin color at any time. Then, I applied brush strokes with varying opacity to control the transparency of the dermis, making it thinner in areas like the eyelids and wrists, where it takes on a bluish tone.

Skin texture map

Once the base skin was added, I could incorporate potential redness, freckles, age spots, or more specific details like burst blood vessels on the nose. I particularly emphasized a purplish color on the eyelids, under-eye circles, and hands to convey aging skin.

For the roughness, I used the displacement map from ZBrush as a base and added some subtle variations by observing areas of the face that appear more reflective, such as the nose, mouth, and areas under the eyes.

Roughness varies depending on the area of the face


When it comes to clothing, I look for base textures in sources like Texture Haven or Adobe Bridge and import them directly into Substance as a starting point. I highly recommend using a single shader for all of the clothing if possible, as it is lighter and easier to work with.

Because using a single material can be more constraining, we can create changes in hue, saturation, and luminance with RenderMan primvars. I created a float primvar for the cape, shirt, and vest, which is then read into the material network with a PxrPrimvar node. This pattern can feed an HSL node to make changes on a per-shape level.
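To make the idea concrete, here is a small Python sketch (not RenderMan code) of what the PxrPrimvar → HSL combination does conceptually: a per-shape float value selects hue/saturation/luminance offsets applied to a shared base color. The primvar values and offsets below are made up for illustration.

```python
import colorsys

# Hypothetical per-shape primvar values mapped to (hue, sat, lum) offsets,
# mimicking an HSL pattern driven by a PxrPrimvar lookup.
HSL_OFFSETS = {
    0.0: (0.00, 0.0, 0.0),    # cape: unchanged
    1.0: (0.05, -0.1, 0.0),   # shirt: slight hue shift, desaturated
    2.0: (0.00, 0.2, -0.1),   # vest: more saturated, darker
}

def shade_variation(base_rgb, primvar):
    """Offset a shared base color in HSL space according to a shape's primvar."""
    h, l, s = colorsys.rgb_to_hls(*base_rgb)
    dh, ds, dl = HSL_OFFSETS.get(primvar, (0.0, 0.0, 0.0))
    h = (h + dh) % 1.0
    s = min(max(s + ds, 0.0), 1.0)
    l = min(max(l + dl, 0.0), 1.0)
    return colorsys.hls_to_rgb(h, l, s)
```

One base texture, three garments, three distinct looks, without duplicating the shader.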

Once I have the base, such as a diffuse color and displacement, I use the baked ambient occlusion maps to add some history, referring to appropriate reference images. Creating stories with texturing is very important. Even something simple, such as a stain on a vest, can tell a compelling story. You know… maybe some mushrooms released spores on it while the witch was gathering wood for her evening by the fireplace.

Speaking of the vest, making realistic wool is challenging, so it's important to create an opacity map to simulate the space between the cloth fibers. Generators are also extremely useful, such as the dirt generator in Adobe Substance 3D, which is a mask that generates dirt based on ambient occlusion and offers a lot of control in its settings.


When starting the lighting process, I always begin by considering the narrative. To convey this story, the lighting is desaturated and high contrast.

It’s useful to start lighting using a mid-gray material in order to bring a sense of predictability to the lighting values without being influenced by colors. Also, a high contrast light rig doesn't mean things should be overexposed, so even in areas where the light hits the most, it’s important that values don't exceed a mid-gray. We can keep an eye on lighting values using the pixel readout in RenderMan’s Image Tool, or IT for short.

Working with gray materials can really help isolate the lighting

Rendering Tip: Set the render threads to -4 in the RenderMan preferences, which will free up 4 CPU threads for the UI and other important processes of the 3D app.

Stay General

All the lights are normalized rectangular lights, which means that I can change the size of a light without changing its exposure. A small light will create sharp shadows and a big light will produce soft ones.
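The relationship behind normalization can be sketched in a few lines of Python (illustrative only, units simplified): when a light is normalized, its total emitted power is held fixed, so the emission per unit of surface area scales inversely with the light's size.

```python
def normalized_light_radiance(power, width, height):
    """For a normalized area light, total emitted power stays fixed,
    so per-area emission drops as the light gets bigger. The image
    stays at the same exposure; only the shadow softness changes."""
    return power / (width * height)
```

Doubling the light's width halves its per-area emission, which is why resizing a normalized light softens shadows without brightening or darkening the render.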

It’s useful to start by placing some fill lights to control the value of the darkest shadows, and don’t hesitate to hide every other light in the scene when adding new ones, so you can see exactly what each light contributes. Once I am satisfied with my fill light rig, I hide it and add a key light, then do the same with my rim lights. Once all three groups are well positioned and calibrated, I visualize them together. Main lights established, yay! I also like to zoom out of my image in the IPR to ensure the overall image remains readable.

Lighting Tips

DIFFUSER: A diffuser is a plane, or a mesh of your choice, that you will place next to the subject. It will create a really soft light on the character due to the Key Light rays bouncing on it and reflecting on the body. This is a very common technique in real life portrait photography, as it helps add a soft bounce of light to the subject by softening the shadows and harmonizing the tone.
BLOCKER: A blocker is also a mesh you add to your scene, but you hide it from the render settings (Render Stats → Visibility). Thanks to RenderMan’s unbiased path tracing, you can place it anywhere you want and it will create an absence of light where you don’t want it. It is best to apply a PxrSurface material to it, so you can control the opacity of the blocker and soften only some hard shadows here and there. Since transparent surfaces affect sampling, a Light Filter is also a great way to block light non-destructively.

Without blockers on the left and with blockers on the right.

GROUND PLANE: Using a ground plane under a character helps bounce light as a real floor would. Artists often forget this, but not having a floor deprives the environment of much needed bounced light. To do this, we can simply add a material to the ground plane with a ground texture in the Diffuse attribute.


Since we are texturing skin, the use of Subsurface Scattering is essential to give the skin its natural glow, which is caused by light bouncing between the skin’s layers, the dermis and epidermis. Using the path-traced SSS models is the easiest and most accurate way to achieve this.

For skin, the diffuse lobe is not necessary, so we can simply plug the skin diffuse texture map into the Subsurface Color and the SSS texture map into the DMFP (diffuse mean free path), which tells RenderMan how deep light should travel before bouncing back. Each color channel in this map represents a wavelength; since our map is predominantly red, red is the wavelength that scatters deepest inside the head, giving us the characteristic red glow of skin.
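A toy Python model (not RenderMan's actual scattering math) makes the per-channel behavior easy to see: treat each channel's DMFP as a mean free path and attenuate it exponentially with depth. The path lengths below are made-up numbers, chosen only so that red dominates as it does in a typical skin map.

```python
import math

# Hypothetical red/green/blue diffuse mean free paths in millimeters.
# Red is longest, as in a red-dominant skin DMFP map.
skin_dmfp = (3.0, 1.2, 0.6)

def transmittance(depth_mm, dmfp_mm):
    """Fraction of each channel's energy surviving to a given depth,
    using simple exponential attenuation per channel."""
    return tuple(math.exp(-depth_mm / d) for d in dmfp_mm)

# At 2 mm inside the head, red survives best, then green, then blue,
# which is why light exiting the skin is tinted red.
r, g, b = transmittance(2.0, skin_dmfp)
```

The deeper the light travels, the more the surviving energy skews red, producing the warm subsurface glow described above.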

I find that having the most control possible is always best for look development, so I always try to insert a remap pattern after my texture maps, to make last minute changes if needed.

For textures that require small details, like skin or wood, I prefer to always have a Specular Edge Color map based on the Displacement map, to make sure the specular recedes in little cavities such as pores and wrinkles.

Once I am happy with my textures, I add smaller details such as the Displacement map and the Normal map. If you’re using a bump map, a PxrBump pattern is needed instead. In my case, I have a normal map, so I used a PxrNormalMap with a low value to keep the material reasonably smooth.

To add a Displacement Map, you can select the shader (blue node) and right-click on the pxrSurface icon in the Maya Shelf, above the viewport, which will create a displacement pattern in the hypershade. You can then connect this node into the shading group of the material.

Here’s my node network. You have to set the displacement mode to Centered in the PxrDispTransform settings; this way mid-gray will equal no displacement, and white and black will displace out and in respectively.
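The Centered remap boils down to a one-line formula, sketched here in Python for illustration (the scale parameter is a stand-in for the displacement amount, not an actual PxrDispTransform attribute name):

```python
def centered_displacement(value, scale=1.0):
    """Centered remap of a displacement texture value in [0, 1]:
    0.5 (mid-gray) -> no displacement,
    1.0 (white)    -> +scale/2, pushed outward,
    0.0 (black)    -> -scale/2, pushed inward."""
    return (value - 0.5) * scale
```

This is why un-displaced areas of the map should be painted mid-gray rather than black.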


Shading the hair is very straightforward, as we're simply using a Marschner material and tweaking the look of our XGen caches. Where it gets a bit more tricky is when we have to select many curves and shade them manually, which is what we need to do for the main hair.

To make this process easier, we can run a simple script, like so:

  1. Select all curves in the outliner.
  2. Press the keyboard down arrow to select all shape nodes.
  3. Then run the following script in the script editor to attach the material, where Material_SG is your shading group.

string $mySel[] = `ls -selection`;
for ($item in $mySel)
{
    // Connect the shading group to each selected curve shape's
    // custom shading group attribute.
    connectAttr -f Material_SG.message ($item + ".rman__torattr___customShadingGroup");
}

Once the material is attached, you have to enable rendering of the curves:

  1. Select all curves in the outliner.
  2. Open the Attribute Spreadsheet.
  3. In the RenderMan tab, enable curve rendering.
  4. Change their width if needed.


Clean Scene

I created a clean Maya scene and imported hand-picked elements from my working scene. It's crucial to keep the scene optimized and free of unnecessary elements, especially after a busy look development session.

Render Settings

I set up render layers to separate Baba Yaga from the background, which is a workflow I've incorporated recently and has really helped me gain control during post-processing and compositing.

As for the image size, it's up to you and your available computing resources. I usually like to render in HD, and occasionally 4K. XPU did really well in this scene and was able to render the final 4K image in less than 7 minutes on an NVIDIA RTX A6000 GPU.

For quality, the Pixel Variance attribute will be the main one to tweak. The Min and Max Samples can be set to 16 and 2048 respectively for most scenes and the Pixel Variance will determine when and where the adaptive sampling should stop.

The Denoiser works really well with average sample counts around 60+, so a Pixel Variance of 0.1 works well for this, and most scenes.
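The interplay between Min/Max Samples and Pixel Variance can be sketched as a toy adaptive sampler in Python (a conceptual illustration, not RenderMan's actual criterion): keep sampling a noisy pixel until the estimate's standard error drops below a threshold playing the role of Pixel Variance, bounded by the min and max sample counts.

```python
import random
import statistics

def adaptive_pixel(sample_fn, pixel_variance, min_samples=16, max_samples=2048):
    """Sample a noisy pixel estimator until the standard error of the
    mean falls below the threshold, within the min/max sample bounds."""
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr < pixel_variance:
            break  # this pixel has converged; stop spending samples on it
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)

# A noisy pixel with true value 0.5: tighter thresholds cost more samples.
random.seed(1)
value, n = adaptive_pixel(lambda: random.gauss(0.5, 0.2), 0.01)
```

Lowering the threshold drives smooth areas to stop early while noisy areas keep sampling, which is exactly the behavior that makes adaptive sampling cheaper than a flat sample count.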

AOVs and Cryptomattes

To enhance flexibility in post-production, I set up Arbitrary Output Variables (AOVs) to render various passes such as diffuse, specular, reflection, etc. These passes can be used to adjust and fine-tune the lighting and shading in the final image/shot.

Furthermore, I also generated Cryptomattes, which are special image channels that provide mattes for individual objects or groups of objects based on their material or shader assignments with sub-pixel accuracy. This makes it easier to isolate and manipulate specific elements during the compositing stage.

Cryptomatte is a great way to get mattes with sub-pixel accuracy


After denoising, make sure to set Nuke to ACES. I didn't have much to composite since I only had two layers, which were combined using a simple Merge.


The first thing I did was lift the blacks and correct the exposure of certain lights, such as the Keylight, which was too dark. To unify the image, I used the Bloom node by compositor Victor Besse. With the help of cryptomattes, I desaturated the t-shirt and saturated the turquoise eye on the left side.


I used merged cryptomattes with ramps to add precise and one-sided glow, such as enhancing the rim light on the cape. As the glow was blurring the details a bit too much, I added a Sharpen node on the cape and slightly on the face.


Since the nose wasn't standing out enough, I used the custom Relight node by compositor Victor Besse, who is a 3D artist and professor at ESMA in Montpellier.

Relighting in post is a very flexible way to achieve last minute changes...when possible...

Finally, to draw direct attention to Baba Yaga and focus the viewer's gaze, I added a vignette.

… and here’s the final image!

About the Artist


Terms of Use

This project is available under an Attribution-NonCommercial 4.0 International License, which allows you to share and redistribute it for non-commercial purposes, as long as you credit the original author.

Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0)

