February 10, 2025
Created and written by Fady Kadry. Edited by Leif Pedersen
Dragon Treasure is a project by Fady Kadry and is inspired by fantasy aesthetics. This tutorial highlights some fundamental pipeline benefits of geometry instancing and procedural shading for treasure and environments with RenderMan for Katana and Houdini.
The gold treasure material is a very simple shader. Its textural detail comes from a Voronoi procedural pattern driving the reflective roughness attributes. The setup is a Voronoi noise with a large scale connected to the roughness gain component, which creates a rich textural quality and adds a dirty, ancient feel to the treasure.
Simple shading goes a long way with procedural patterns
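To make the idea concrete, here is a minimal, self-contained Python sketch of the concept rather than the actual shader network: a Voronoi-style cell distance is remapped into a roughness value, so the cell pattern shows up as variation in the reflective roughness. The cell count and roughness range are made-up values for illustration.

```python
import math
import random

def voronoi_f1(x, y, cells=4, seed=7):
    """Distance to the nearest Voronoi cell point (F1) for a coarse, fixed set of cells.

    A large 'Voronoi scale' in the shader corresponds to few, large cells here.
    """
    rng = random.Random(seed)
    points = [(rng.random() * cells, rng.random() * cells) for _ in range(cells * cells)]
    return min(math.hypot(x * cells - px, y * cells - py) for px, py in points)

def roughness_from_voronoi(x, y, rough_min=0.25, rough_max=0.6):
    """Remap the cell distance into a roughness 'gain', clamped to a usable range."""
    d = min(voronoi_f1(x, y), 1.0)
    return rough_min + d * (rough_max - rough_min)

if __name__ == "__main__":
    # Sample a few UV positions to see how the roughness varies across the surface.
    for u, v in [(0.1, 0.2), (0.5, 0.5), (0.9, 0.7)]:
        print(u, v, round(roughness_from_voronoi(u, v), 3))
```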
It’s incredibly important to get these 200+ million coins to render in a quick and optimized fashion, and, more importantly, parsing the scene efficiently is crucial for interactivity. Using instancing to achieve this really helped keep deadlines on track, as the time to first pixel went from 2 hours to only 5 minutes, which makes a huge difference on a humble single-node render machine.
Houdini was a great tool for scattering a large quantity of gold coins and treasure. I started with a simple for-each loop to vary the shapes of gold coins, cups, swords, helmets, etc. By utilizing some height-field scatter trickery, I managed to create an underlying terrain geometry, which also had the same gold shader applied to it … welcome to CGI, where it’s all about faking things!
Shuffle different shapes to create the gold treasure
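As a rough illustration of that shuffle, here is a minimal Houdini Python SOP sketch (the variant names are hypothetical, and the project itself used a for-each loop): each scattered point gets a random variant attribute that a downstream instancing setup can use to pick which piece of treasure to copy.

```python
# Minimal Python SOP sketch: tag each scattered point with a random treasure variant.
# The variant names are hypothetical placeholders.
import random
import hou

node = hou.pwd()
geo = node.geometry()

variants = ["coin_a", "coin_b", "cup", "sword", "helmet"]
attrib = geo.addAttrib(hou.attribType.Point, "variant", "")

for point in geo.points():
    # Seed per point so the shuffle is stable from cook to cook.
    rng = random.Random(point.number())
    point.setAttribValue(attrib, rng.choice(variants))
```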
At this point, my instinct was to export a single USD file, which has a surprisingly (dare I say, satisfyingly) slim footprint on disk. Unfortunately, this workflow turned out to be too good to be true, as it independently loaded each piece of treasure … times 150 million.
This unexpected result led me to ask some experts if there was an optimization step I missed along the way. Nemanja Stavric (a dear friend and colleague) gave me the idea of combining them instead, so that we’re instancing larger clumps of treasure rather than individual pieces.
This solution helped me reduce first pixel times, at the expense of slightly higher memory usage.
Combining clumps of treasure and re-scattering using height-field scatter
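The exact export network isn’t reproduced here, but a minimal USD Python sketch of the idea looks like the following (file names, prim paths, and point data are hypothetical placeholders): a PointInstancer references a handful of pre-made clump prototypes and scatters them by index, so the heavy geometry is loaded once and reused everywhere.

```python
# Minimal sketch of instancing clumps of treasure with a USD PointInstancer.
# File names, prim paths, and point data are hypothetical placeholders.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("treasure_instanced.usda")

instancer = UsdGeom.PointInstancer.Define(stage, "/GoldSea/Scatter")
stage.DefinePrim("/GoldSea/Prototypes", "Scope")

# Each prototype is a pre-simulated clump of coins, cups, swords, etc.
proto_paths = []
for i, clump_file in enumerate(["clump_a.usd", "clump_b.usd"]):
    proto = stage.DefinePrim(f"/GoldSea/Prototypes/Clump_{i}", "Xform")
    proto.GetReferences().AddReference(clump_file)
    proto_paths.append(proto.GetPath())

instancer.CreatePrototypesRel().SetTargets(proto_paths)

# One entry per instance: which prototype to use and where to place it.
instancer.CreateProtoIndicesAttr([0, 1, 0])
instancer.CreatePositionsAttr([Gf.Vec3f(0.0, 0.0, 0.0),
                               Gf.Vec3f(2.5, 0.1, 1.0),
                               Gf.Vec3f(-1.5, 0.05, 3.0)])

stage.GetRootLayer().Save()
```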
Let’s start with the static part of the gold sea and its scattering. For this portion, I used the height-field scatter technique. I started with a pre-simulated clump of coins, armor pieces, and other miscellaneous assets to create the look of not just coins, but a busy cluster of treasures.
After doing a quick sim to relax the pieces onto a ground plane, I exported the results to a single USD file. To reach the final shape of the gold sea quickly, I sculpted it with a few very simple height-field techniques.
Using the same output and the amazing HeightField Scatter node, with some mask painting and some trial and error to dial in the scatter density, I was able to find the right ratio of coverage for the composition while not leaving any gaps … and since this is never perfect, I applied the same gold material to the base geometry to fill in any remaining treasure gaps.
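The real network was built interactively, but a stripped-down Houdini Python sketch of that chain could look like this (the container and node names are hypothetical, and the mask painting and density tweaks are left out):

```python
# Stripped-down sketch of the height-field scatter chain.
# Node and container names are hypothetical; masks and density were tuned interactively.
import hou

container = hou.node("/obj").createNode("geo", "gold_sea")

terrain = container.createNode("heightfield", "terrain")        # base height field
sculpt = container.createNode("heightfield_noise", "sculpt")    # rough shaping
sculpt.setFirstInput(terrain)

scatter = container.createNode("heightfield_scatter", "clumps") # scatter points for the treasure clumps
scatter.setFirstInput(sculpt)

container.layoutChildren()
```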
At the beginning of this journey, I started to experiment with ways to create a simulation with fluid-like coin motion. In one of my first tries, I created a FLIP fluid tank and ran the simulation with a very low particle density (a large particle separation). This gave me a manageable number of points to instance from. From there, I tried running rigid body dynamics sims.
This ended up being highly computationally expensive, so instead I distilled my approach: rather than separate multi-million-point sims that clogged my 256 GB of RAM, I simulated clusters of treasure (the same technique I used in the scattering step) with very lean collision geometry to avoid long, highly accurate collision calculations. This method resulted in a much faster, albeit less accurate, simulation.
To speed things up even more, I localized where I wanted to sim using separate caches, which were all timed to run sequentially to suit the art direction I was looking for and the camera movement.
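As a tiny illustration of that sequencing, here is a hypothetical Houdini Python snippet that staggers the frame ranges of a few file cache nodes so each localized sim only covers the frames where the camera can see it (the node paths and frame ranges are made up):

```python
# Hypothetical example: stagger localized sim caches so each one only runs
# over the frames where it is actually visible to the camera.
import hou

sim_caches = [
    ("/obj/gold_sim/cache_foreground", 1001, 1060),
    ("/obj/gold_sim/cache_midground", 1040, 1120),
    ("/obj/gold_sim/cache_background", 1100, 1180),
]

for path, start, end in sim_caches:
    cache = hou.node(path)
    if cache is None:
        continue  # skip node paths that don't exist in this scene
    # 'f' is the standard start/end/increment parameter tuple on file cache SOPs.
    cache.parmTuple("f").set((start, end, 1))
```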
Thanks to the way the shot was laid out, I managed to keep the workflow contained and reach a nice look using a single column, which I built procedurally in Houdini.
Column setup in Houdini
I approached the geometric proceduralism using Voronoi fractures on the column, which, combined with point wrangling, allowed me to wobble the edges for a worn-out marble column aesthetic.
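The wrangle itself isn’t reproduced here, but the wobble boils down to a small per-point jitter after the fracture; a roughly equivalent Python SOP sketch (with a hypothetical jitter amplitude) would be:

```python
# Roughly equivalent Python SOP sketch of the edge wobble: jitter every point
# of the fractured column by a small, per-point-seeded random offset.
import random
import hou

node = hou.pwd()
geo = node.geometry()

amplitude = 0.01  # hypothetical wobble size in scene units

for point in geo.points():
    rng = random.Random(point.number())
    offset = hou.Vector3(rng.uniform(-1.0, 1.0),
                         rng.uniform(-1.0, 1.0),
                         rng.uniform(-1.0, 1.0)) * amplitude
    point.setPosition(point.position() + offset)
```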
Column export using USD to import as scattered geo
To keep the procedural wins going, I used RenderMan patterns to shade the columns. By using PxrRoundCube, I was able to achieve tileable texturing in Katana, which was great because I could generate convincing textures without having to go to a painting application to create new ones.
PxrRoundCube can be connected to multiple texture maps
A dragon’s lair has an epic story to tell, so conveying aging was crucial. Using PxrDirt was a really great way to communicate a worn-out effect by adding additional grime and dust accumulation in the cracks. Leaving the pieces of column geometry separate also helped isolate this effect in a convincing way, which ultimately gave it a more natural look.
Now that things have come together, it’s time to set up the Katana rendering, including collecting everything, setting up the different PrmanGlobalStatements, RenderSettings, and AOVs, and streamlining the scene for different passes.
For maximum control in compositing, I decided to split the scene into four different passes: Character (the dragon), Gold (all treasure), Environment (columns, cave ceiling, etc.), and AO.
For the import stage, I mostly used usdIn in Katana; it’s now a native node that comes with the Pixar USD installation that ships automatically with Katana. For some static assets, I used alembicIn.
USD Katana scene ingestion
If you look at the screenshot below, you’ll also notice some AttributeCopy and AttributeSet nodes. These were placed there to help me with the __Pref attribute, as I forgot to add it properly in Houdini before the export. __Pref is really handy when doing localized grading in post, and in the included Katana scene you’ll be able to gain more perspective on these attributes and their contributions.
Overcome import hiccups with RenderMan attributes
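If you want to avoid the same hiccup, the reference-position attribute can be added in Houdini before export; a minimal Python SOP sketch, dropped on the rest (un-deformed) geometry just before the USD export, looks like this:

```python
# Minimal sketch: store the rest position as a __Pref point attribute so it is
# available in RenderMan, e.g. for the localized grading mentioned above.
# Run this on the un-deformed (rest) geometry before exporting.
import hou

node = hou.pwd()
geo = node.geometry()

pref = geo.addAttrib(hou.attribType.Point, "__Pref", (0.0, 0.0, 0.0))

for point in geo.points():
    p = point.position()
    point.setAttribValue(pref, (p[0], p[1], p[2]))
```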
The LPEs in this project were imported from a previous project (Rivendell fan art), and the beauty of Katana is that you can reuse nodes, or even a whole template, between projects and even between versions of the software (for instance, the LPE group was ported from Katana 3.5, this project was done using Katana 6.0, and I also tried the same Katana script with 6.5).
AOVs and LPEs in Katana
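For context, LPEs are just strings describing which light paths land in which output. The exact set used here lives in the included Katana scene, but a few commonly used RenderMan-style expressions look like this (shown as a Python dict purely for readability; double-check them against the RenderMan documentation for your version):

```python
# A few commonly used RenderMan-style light path expressions mapped to AOV names.
# Shown as a Python dict purely for readability; the project's actual LPE group
# is in the included Katana scene.
lpe_aovs = {
    "directDiffuse":    "C<RD>[<L.>O]",
    "indirectDiffuse":  "C<RD>[DS]+[<L.>O]",
    "directSpecular":   "C<RS>[<L.>O]",
    "indirectSpecular": "C<RS>[DS]+[<L.>O]",
}

for name, expression in lpe_aovs.items():
    print(f"{name}: lpe:{expression}")
```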
I used the RenderMan machine learning denoiser to a great extent. It helped me a lot when I needed a quick turnaround for a test, and it was crucial for cutting down final render times, since the image doesn't have to reach full convergence before the denoiser can clean things up.
Denoise before and after. Zoom in on columns and treasure.
Inside the Render Elements node, there are a variety of AOVs and LPEs that are written into the rendered EXR file. This gives the denoiser the data it needs to run.
Denoise node in Katana
This project was also an opportunity to learn about deep rendering, as it was one aspect that I never touched upon in my previous projects … and it turned out to be a lot of fun to deal with.
To make this work in RfK, all we need to do is change the file type to deep and RenderMan for Katana will output a deep EXR format automatically.
This deep data allows us to get pixel-perfect compositing of very fine detail. It's also pretty fun to navigate in Nuke!
Deep data in Nuke
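If you want to poke at the deep renders yourself, a minimal Nuke Python sketch for bringing one in would be the following (the file path is a placeholder; in practice you would simply create the nodes in the UI):

```python
# Minimal Nuke Python sketch: read a deep EXR and flatten it for viewing.
# The file path is a placeholder.
import nuke

deep_read = nuke.nodes.DeepRead(file="renders/dragon_treasure_deep.####.exr")

# DeepToImage collapses the deep samples so the result can feed normal 2D nodes.
flat = nuke.nodes.DeepToImage(inputs=[deep_read])
```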
Although deep compositing was a new concept for me, I found it a fascinating concept and toolset to use, and it was especially excellent for adding 2D volumetrics to mimic distance fog during compositing.
2D fog in Nuke
And finally, here is a look at my comp treatment and final node graph in NukeX. You'll need to render the frames locally, but the entire project will hopefully be an insightful look at a productive pipeline, which you can gather ideas from and adapt for your personal projects.
If you're interested in learning more about this great project, make sure to visit Fady's website by following the link on his bio below.
Go and play with the Nuke script!
Fady Kadry is a CG/DFX supervisor who has worked at some of the biggest studios in the world, including Double Negative, Weta Digital, Scanline, Industrial Light & Magic, Method Studios, DNEG, and ReDefine VFX, and he is currently back at DNEG.
This project is available under an Attribution-NonCommercial 4.0 International License. This allows you to share and redistribute it for non-commercial purposes, as long as you credit the original authors.