
Le Creuset, but Digital

Written by Alexander Weide. As originally published in Digital Production.

Creative Product rendering

In early September 2021, Chris Bradley of FerebeeLane agency approached me through Adobe's Behance platform to discuss a project for their client Le Creuset. In October, we had a kick-off meeting on Zoom and a very pleasant conversation about the general project idea and the budget. We came up with the idea of growing metal patterns, similar to what many of us remember from the intro of the Netflix series "The Crown." Of course, we didn't want to recreate the effect, but we really liked the overall look in combination with the music. It was a great source of inspiration.





The Project

Agency: FerebeeLane

Client: Le Creuset

Producer: Lanford Swan

Creative Director: Chris Bradley


After these initial discussions, I created a very rough animation of what this might look like: I immediately sat down in Houdini 18.5 and built a quick test animation within two hours, including rendering in Pixar's RenderMan XPU. Rendering the entire sequence in 4K took less than 50 minutes. I then cut the test animation into a test trailer provided by FerebeeLane, so that very same day we had a new basis for further discussion.

Screenshot in DaVinci Resolve 17 Studio – test of the animation idea.


Tasks

After several discussions, we agreed to start the project and make it happen in October 2021. My job on this project was to deliver some key simulation rendering scenes as well as to recreate the new "Le Creuset Dutch Oven" in 3D from start to finish within four weeks. That's a very tight schedule for a single artist, but we finished the 3D production part with two days to spare.

When the project finally launched on October 1, 2021, I set up an exclusive Discord server for faster communication with all project members.

Communication

I am very happy and grateful that this was possible, especially because communicating this quickly is still quite new for many agencies around the globe. Discord is a communication platform similar to a WhatsApp group, available cross-platform on smartphones, Linux, macOS, and Windows. But Discord is much more: you can create chat channels governed by a permission structure, so individual channels can be assigned to specific team members and everyone sees only what is relevant to them. Discord originally comes from the gaming scene, where people also talk to each other verbally over their headsets. It is ultra-easy to set up, free of charge, and increasingly popular as a communication base that replaces traditional email in home-office and remote work.

RenderMan Discord


Did you know RenderMan has an unofficial Discord server where you can chat with developers and users?


“This project changed my entire workflow to a more efficient, quality-driven workflow based on RenderMan's multipass rendering.”


Of course, there are other similar communication systems, the best-known example being Slack. Without digressing, I can say that Discord provided us with a communication base for more detailed discussions about every aspect of the images and for sharing ideas. In my experience, email can only play a small part when collaborating around the globe. It was an amazing experience to have a direct line to everyone responsible within minutes, not hours. Perhaps that's because this method of communication helps overcome the challenges created by the vast differences between time zones.

Converted "Dutch Oven" as a polygon model.


Modeling and Look-Development

In order to start the simulation process, it was necessary to optimize and recreate the CAD data of the "Dutch Oven" in Houdini. CAD data is usually not suitable for a simulation process, as NURBS (Non-Uniform Rational B-Spline) surfaces often cause problems when converted to polygons. This messy data needs to be corrected, and the converted models must be cleaned of data that is not needed for the project.

Imagine a car that you only want to show from the outside, but the engine is inside with all its thousands of bolts. These large data sets need to be cleaned up, and Houdini makes it possible to solve the task relatively quickly. The converted data had some very bad polygon intersections, duplicate faces, and hard edges, typical characteristics of converted CAD data. With Houdini it was possible to solve these problems. In my personal opinion, though, Houdini needs many more CAD conversion tools in the future to better meet the needs of product visualization. It does have other great tools to accomplish the same thing, but this sometimes requires real out-of-the-box thinking.

In Houdini 18.5, I started by distributing millions of points on the flat CAD surfaces and converting them to VDB Signed Distance Fields. Then I started using the very important masking features in the VDB tools to shape and recreate the realistic look of the "Dutch Oven." CAD data often does not have rounded corners, as I mentioned earlier. These rounded corners are not present in the CAD data because they are created in the casting process of the real "Dutch Oven." Using the VDB tools, I was able to recreate the typical look of fabricated metal. However, this process has one drawback: you get tons of polygons. Fortunately, Houdini also has very good retopology and modeling tools, which were also necessary to recreate some parts of the model.
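The production setup was built as a node network, but a rough sketch of how such a chain can be wired up with Houdini's Python API looks like the following. The node type names are from Houdini 18.5; the file path, point count, and voxel size are purely illustrative, and parameter names may vary between versions.

```python
# Rough sketch (not the production setup) of the point -> VDB SDF chain
# in Houdini's Python API. Values and the file path are illustrative.
import hou

geo = hou.node('/obj').createNode('geo', 'dutch_oven_rebuild')

# Load the converted CAD polygons (hypothetical path).
cad = geo.createNode('file', 'cad_import')
cad.parm('file').set('$HIP/geo/dutch_oven_cad.bgeo.sc')

# Distribute millions of points on the flat CAD surfaces.
scatter = geo.createNode('scatter', 'surface_points')
scatter.setFirstInput(cad)
scatter.parm('npts').set(20000000)  # Force Total Count

# Turn the point cloud into a VDB signed distance field.
sdf = geo.createNode('vdbfromparticles', 'sdf')
sdf.setFirstInput(scatter)
sdf.parm('voxelsize').set(0.002)

# Smooth the SDF to get the rounded, cast-metal edges the CAD lacks.
smooth = geo.createNode('vdbsmoothsdf', 'rounded_edges')
smooth.setFirstInput(sdf)

# Convert back to polygons for UVs, texturing, and rendering
# ("Convert To" set to Polygons on the node).
poly = geo.createNode('convertvdb', 'to_polygons')
poly.setFirstInput(smooth)
poly.setDisplayFlag(True)
geo.layoutChildren()
```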

The image below shows the process tree of all the model changes in Houdini. Most of them are interconnected VDB tools.

Reconstructed 3D model.


Retopology

After completing the modeling phase, I considered doing a retopology. However, we decided against it: a proper retopology is always very time-consuming and, since we were using RenderMan, it was not absolutely necessary. RenderMan can render huge amounts of data, billions of polygons and millions of particles, but more on that later. I optimized the model as much as possible using various workflows I developed myself, and I also reduced the polygon count.

The UV unwrapping was also largely procedural: I was able to cut and unwrap the individual UV patches fully automatically, with the exception of a single edge. The outside edge of the pot cost me about 3,000 clicks; I had to manually select over 3,000 edges along the top rim of the "Dutch Oven," which took almost two hours. Sometimes procedural workflows aren't the best for the schedule, and manual selection can be faster.


Texturing and Test Rendering

As usual, I chose Adobe tools for texturing the 3D assets. I used Adobe Substance Designer and Adobe Substance Painter to get the job done efficiently. These tools also allow texturing in the ACEScg color space, an important detail if the client places a high value on natural color reproduction. ACEScg has been established in film and advertising for several years, while the traditional sRGB and log color spaces play little role in much of high-end post-production. When Netflix made the switch to an ACES pipeline, I did the same a short time later. ACES is one of the few color pipelines that ensures the final animation or film will display true color on any final output device.

The ACES workflow enables standardized output and conversion of image and color material. Of course, the final output color spaces are again sRGB, log, or wide-gamut spaces; nevertheless, there would be massive color differences if production had not already taken place in ACES.
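To make the conversion step tangible, here is a minimal PyOpenColorIO sketch, assuming an OCIO v2 installation and an ACES 1.2 config (the color space names below are taken from that config; the pixel value is a dummy).

```python
# Minimal sketch: converting texture pixels from sRGB texture space to
# ACEScg with PyOpenColorIO. Assumes the OCIO environment variable
# points at an ACES 1.2 config; the pixel value is a dummy.
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
processor = config.getProcessor('Utility - sRGB - Texture', 'ACES - ACEScg')
cpu = processor.getDefaultCPUProcessor()

pixel = [0.5, 0.25, 0.1]      # one RGB sample in sRGB texture space
acescg = cpu.applyRGB(pixel)  # the same sample in ACEScg
print(acescg)
```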

Houdini process VDB modeling tree.


In addition, RenderMan 24 is standardized on ACEScg. Nowadays it is also quite easy to export Painter materials directly to the RenderMan shader library, which means you basically only have to worry about converting the textures to RenderMan's own .tex texture format.
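That conversion is done with RenderMan's txmake command-line utility and can be scripted in a few lines; the sketch below assumes textures exported as EXR files, and the folder name is hypothetical.

```python
# Minimal sketch: batch-converting exported textures to RenderMan's
# .tex format with the txmake command-line utility. Paths are hypothetical.
import pathlib
import subprocess

texture_dir = pathlib.Path('textures/exports')   # hypothetical folder
for exr in texture_dir.glob('*.exr'):
    tex = exr.with_suffix('.tex')
    subprocess.run(['txmake', str(exr), str(tex)], check=True)
    print(f'converted {exr.name} -> {tex.name}')
```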

Screenshot of the Adobe Substance 3D Assets database


The entire export process is largely automatic. Creating shaders in these tools also allows for easy cross-platform use: you can open the contents of the library in Maya, Houdini, and even Blender and get exactly the same result with RenderMan 24 in any of these programs, an outstanding and previously unheard-of feature of RenderMan. In recent years, I have also purchased procedural shaders and materials on the Adobe Substance 3D Assets platform, which has always helped me a lot in meeting budget and time constraints.

Image of the finished textured model in Adobe Substance Painter


Within the first four or five days, we had a fully textured and renderable "Dutch Oven." All textures were exported in ACEScg color space for maximum color accuracy for later editing in DaVinci Resolve.

Render preview in RenderMan for Houdini


The image above shows one of the initial rendering results. I did everything I could to make sure the result matched the photos provided by FerebeeLane. I was also concerned about how the enamel would be applied to the product, so I created layer-based shaders to simulate every aspect of the real product. I also kept in mind that the whole thing would have to be rendered in time.

Our longest shots had render times of no more than fourteen minutes per frame, but these included hundreds of millions of particles. The "Dutch Oven" itself had render times of about five minutes in close-up. This always included rendering with subsurface scattering and refraction effects enabled.


Render and Compositing Setup

From the first day of production, we thought not only about how to stay on schedule, but also about how to retain as much flexibility as possible (for post-production, editing, client changes, etc.). We decided to leave some room for experimentation in compositing, with the ability to completely change any rendered shot in compositing if necessary. Everything needed to be as flexible as possible.

Originally, I had decided to use RenderMan XPU to render entire shots in a few minutes. However, at that time it was almost impossible to buy an RTX card with enough GPU RAM in Europe; the big retailers kept saying there were supply problems in the market and we should come back next year. So I had to use my RTX 2060 KO card, which has only 6 GB of RAM but a TU104-series GPU, the chip also used in the 2080, giving it slightly more power than the standard 2060. Due to our workflow decisions, I only needed the card's RTX power in compositing, mainly for denoising all the multipass render layers in Neat Video.


Denoise with Neat Video

In our experience, Neat Video is currently the best denoising tool money can buy. With Neat Video's temporal denoiser you can almost completely remove noise from render sequences and even film footage without getting a blurry image. Simply speaking, each image is just a signal consisting of three color curves; jagged lines in those curves are often image noise, and that noise can be removed. Nvidia's RTX graphics cards allowed us to denoise 27 full-HD layers in real time in DaVinci Resolve 17 Studio. We were also able to upscale the renders to 4K using AI.
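To make the idea tangible: temporal denoising exploits the fact that noise changes every frame while the underlying signal does not. The sketch below is not Neat Video's algorithm, just the basic principle, shown as a simple temporal median in NumPy on synthetic data.

```python
# Minimal sketch of the *idea* behind temporal denoising: combine each
# pixel across neighboring frames (here with a median) so frame-to-frame
# noise cancels while the stable signal survives. Not Neat Video's
# algorithm, just the underlying principle.
import numpy as np

def temporal_median(frames, radius=2):
    """frames: (time, height, width, channels) float array."""
    out = np.empty_like(frames)
    n = frames.shape[0]
    for t in range(n):
        lo, hi = max(0, t - radius), min(n, t + radius + 1)
        out[t] = np.median(frames[lo:hi], axis=0)
    return out

# Noisy synthetic clip: constant gray plus per-frame noise.
clip = 0.5 + 0.1 * np.random.randn(10, 4, 4, 3)
print(clip.std(), '->', temporal_median(clip).std())
```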

I decided to do all the ray tracing in RenderMan RIS. RIS is one of the render engines that come with RenderMan, which has been used in almost every blockbuster movie made in the last 30 years. The most prominent examples are Star Wars and, of course, all the Pixar films; RenderMan was also used in "The Mandalorian" alongside the much-praised Unreal Engine.

Finished compositing workflow test rendering in Fusion Studio. The complete image was built up from individual layers.


The image you see above is a RIS image, and the power of RenderMan RIS is that you can render hundreds of additional AOVs/render layers for compositing without significantly increasing render time. This is a great advantage of multipass rendering, since you can essentially change all the shading in compositing without touching the 3D program again.
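The principle is easy to sketch: if the beauty image is the sum of its lighting AOVs, compositing can re-weight each pass and effectively regrade the shading. The NumPy sketch below illustrates this with made-up layer names and random data, not RenderMan's exact AOV setup.

```python
# Minimal sketch of the multipass idea: rebuild the beauty image as a
# weighted sum of lighting AOVs so the look can be changed in comp
# without re-rendering. Layer names and data are illustrative.
import numpy as np

def rebuild_beauty(aovs, gains):
    """aovs: {name: (h, w, 3) array}; gains: {name: float} overrides."""
    return sum(gains.get(name, 1.0) * layer for name, layer in aovs.items())

h, w = 540, 960
aovs = {name: np.random.rand(h, w, 3)
        for name in ('directDiffuse', 'indirectDiffuse',
                     'directSpecular', 'indirectSpecular')}

# Push the specular response 20% hotter, entirely in compositing.
regraded = rebuild_beauty(aovs, {'directSpecular': 1.2,
                                 'indirectSpecular': 1.2})
```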

This project changed my entire workflow to a more efficient, quality-driven workflow based on RenderMan's multipass rendering and was proof to me that multipass rendering is not dead.

The left image shows the raw render result – right image the denoised and corrected result.


Fusion

During production, it also became apparent that I would no longer be using Fusion Studio standalone. I switched to the version of Fusion integrated into DaVinci Resolve Studio, which proved to be much more robust and surprisingly more powerful in a direct comparison with the standalone.


Intel Denoiser

In order to denoise render layers, we also ran tests with the Intel Denoiser, which has been part of Houdini's compositing context since version 17. The problem with the Intel Denoiser is that it doesn't really work well temporally: it tends to flicker from frame to frame, as it is only designed for still images. It is nevertheless a very powerful denoising tool, and I used it mainly for the subsurface scattering layers, which can be extremely noisy; after the Intel Denoiser they look almost homogeneous and correct, as if calculated with millions of samples. We then ran Neat Video over each image denoised with the Intel Denoiser to remove the flicker. The Intel Denoiser generates completely new pixels in the synthetic image with the help of a CPU-based machine-learning process. It turns out that this is the solution for now, until we have something better.

I really liked how efficient Fusion became in DaVinci Resolve 17 Studio. I'd also like to mention that Fusion's old EXR multilayer problem is now largely solved: you can load sublayers dynamically, similar to Nuke's shuffle system.


Data Nightmare: Sand Particles

Some of the shots contained large amounts of sand particles, and they had to match the real photographs as closely as possible. The idea for these shots was an abstract representation of the forging process of the "Dutch Oven" in molded sand. In reality, so-called kinetic sand is used for this purpose, and the renderings had to capture its look and feel.

The first idea was to use textures. The problem with textures is that they don't hold up in the camera's close-ups as well as simulated particles do, and they hardly match the real lighting conditions; it would be very, very hard to blend the two worlds, textured particles and simulated particles. The solution for these shots was a base mesh filled with 180 million particles at render time, plus 40 million simulated particles falling from above. I cached all the simulated particles frame by frame, resulting in a cache of 288 GB of data. The filled particle tank was a static single-frame cache.

Render test of the glowing metal stream as well as an alternative version.


The Machinery

Fortunately, I had upgraded my entire workstation prior to the project and had one of the latest Intel 11600K CPUs. I mention this because it was revealed last year that RenderMan RIS and the new ILM Lama shading system offer a rendering performance increase of up to 20% if the workstation has an AVX-512-capable CPU and motherboard. After numerous tests, I was able to confirm this claim.

This is a big advantage for large studios as well, since GPU rendering is not yet powerful enough to deliver footage that requires 60 GB of RAM or more during rendering. As far as I can tell, AVX-512 was designed to deliver better performance under heavy memory loads. On the technical side, I'd also like to mention that Intel's Cryo cooling is a game changer: you only need power, no air or liquid, to cool the CPU to temperatures below room temperature.


Rendering for the Second Time

Finally, we were able to complete the rendering and simulation of this shot within 21 hours. It is also worth noting that all shots were rendered with a maximum sample count of 64. Each sand particle is a complete geometric sphere with a unique random color; the random variation was in the range of 5% between particles. All particles in all shots were rendered with subsurface scattering and refraction shaders, which were key to achieving a very realistic result.
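A minimal NumPy sketch of that kind of per-particle color variation looks like this; the base color and particle count are illustrative.

```python
# Minimal sketch: give every sand particle a unique color within roughly
# 5% of a base color, using a fixed seed so the variation is stable
# from frame to frame. Base color and counts are illustrative.
import numpy as np

base_color = np.array([0.85, 0.78, 0.62])  # sandy beige, illustrative
num_particles = 1000000

rng = np.random.default_rng(seed=42)       # fixed seed -> stable colors
jitter = rng.uniform(-0.05, 0.05, size=(num_particles, 1))
colors = base_color * (1.0 + jitter)       # each particle within +/-5%
```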

Throughout production, Pixar's RenderMan 24 delivered predictable, very stable, and realistic results. Both the planned and the actual render times were almost 100% in line with the schedule. The 3D production work I was responsible for was completed from start to finish in 168 hours, which was pretty much on budget.

Illustration of thousands of area lights dynamically generated by Houdini for additional illumination.


Molten Metal

I think the molten metal shot was one of the most fun. The most interesting challenge was getting the liquids to flow the way they were supposed to in our imagination. The sand particles also played a very big part in this shot: I had to use camera frustum culling because the camera passed much closer to the particles and I had to increase the particle count by a factor of three.

To keep the amount of data down, I also reduced the size of the individual sand grains, which we had represented much larger in the other shots. Even so, the sand-casting data set exploded to roughly a billion sand particles. The three metal streams were tied to invisible force fields that drove them. Realistic animation in the sand didn't work, so I used curve-based volume SDFs to drive the forces so the streams flowed in front of the camera as intended. This was also necessary because the mold itself had a slope and the fluids tended to flow out of the grooves, so I essentially channeled the metal streams with invisible walls.
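For illustration, here is a minimal NumPy sketch of frustum culling: project particle positions into normalized device coordinates and keep only those that can end up on screen. The simple pinhole projection below is a stand-in, not the production Houdini camera.

```python
# Minimal sketch of camera-frustum culling: keep only the particles
# whose projected positions land inside the (slightly padded) frustum.
import numpy as np

def frustum_cull(points, view_proj, margin=0.05):
    """points: (n, 3) array. Returns the subset inside the frustum."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    clip = homo @ view_proj.T
    ndc = clip[:, :3] / clip[:, 3:4]          # perspective divide
    inside = (np.abs(ndc[:, :2]) <= 1.0 + margin).all(axis=1)
    inside &= clip[:, 3] > 0                  # in front of the camera
    return points[inside]

# Simple pinhole projection looking down -Z with a 45-degree FOV.
f = 1.0 / np.tan(np.radians(45.0 / 2))
proj = np.array([[f, 0,  0,    0],
                 [0, f,  0,    0],
                 [0, 0, -1, -0.1],
                 [0, 0, -1,    0]], dtype=float)

pts = np.random.uniform(-5, 5, (100000, 3))
print(len(frustum_cull(pts, proj)), 'of', len(pts), 'particles kept')
```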


“Throughout production, Pixar's RenderMan 24 delivered predictable, very stable and realistic results.”


After the simulation, I added some smoothing and shaping operations to eliminate most of the hard edges, again using the full range of VDB tools in Houdini. We tried different ideas: the first was a more realistic look of a freshly poured metal casting, then we added some growing metal patterns, then we moved on to another version. In reality, molten metal at this stage barely glows, but viewers didn't like that, so we went with a glowing variant, essentially a mix of reality and what the audience expected to see. Showing glowing metal also gives the image a sense of warmth.

To support the idea of heat, I simulated several columns of smoke rising from the molten metal, and I used Z-buffer layers to distort the image into a heat flicker. For the smoke I used the great Axiom solver for Houdini, developed by Matt Puchala, a former ILM FX TD; my thanks and regards go to him. Axiom is fantastic, fast, and works almost perfectly in combination with RenderMan. For the lighting of the shot I used dynamic area lights growing along the metal.

The light emission from the metal was also transferred to these area lights; the intensity of each light was controlled mainly by the emission attributes of the nearby fluid.
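The idea can be sketched in a few lines: each dynamically created area light samples the fluid's emission attribute near its own position. In the sketch below, SciPy's KD-tree stands in for Houdini's point-cloud lookup, and all values are random stand-in data.

```python
# Minimal sketch: drive each area light's intensity from the emission
# attribute of the nearby fluid points. SciPy's KD-tree stands in for
# Houdini's point-cloud lookup; all data here is a random stand-in.
import numpy as np
from scipy.spatial import cKDTree

fluid_pos = np.random.rand(50000, 3)      # fluid points (stand-in data)
fluid_emit = np.random.rand(50000)        # per-point emission attribute
light_pos = np.random.rand(200, 3)        # dynamically created area lights

# Average the emission of the 16 nearest fluid points per light.
tree = cKDTree(fluid_pos)
_, idx = tree.query(light_pos, k=16)
light_intensity = fluid_emit[idx].mean(axis=1)
```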

Driven by a particle simulation, the shader was swapped with blue enamel where the simulation touched the metal. We also mixed between reality and fantasy in this shot.


A Blue Wall of Paint

This shot proved to be the most challenging of the entire production. The original idea we all agreed on was to have the entire "Dutch Oven" move through a giant wall of blue liquid. The problem is that paint sticks to the surface, so almost the entire casserole is covered for a period of time, and the shot becomes nothing but a distracting blue wall. I'm not exaggerating when I say that Chris and I spent nights on this creative problem. I came up with ideas for painting the casserole in different ways, with a swirling stream of liquid, with a ribbon of liquid, but all of them had problems that were not compatible with what we had envisioned.

In addition, there were technical difficulties related to Houdini itself: Houdini wasn't able to provide sufficient substep behavior in FLIP fluid emitters. Emitting fluids from a particle source in a high-speed shot with 5 to 8 subframes turned out to be a technical challenge.

I investigated the problem, and it turned out that it wasn't my fault but a problem with the solver itself, so the solution was to build a FLIP fluid emitter myself using Houdini's own tools.

The two images show the substep emitter particle behavior before and after the intervention in the simulation solver of Houdini.


In short, I averaged the emitter positions over multiple substeps across the entire animation. This sounds complicated, but it's basically just a distribution of the individual emitter particles over time.
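A minimal sketch of the principle, not my actual Houdini setup: spread the birth times of the emitted particles evenly across the substeps of a frame and interpolate the emitter position for each birth time, so a fast-moving emitter leaves a continuous trail instead of one clump per frame.

```python
# Minimal sketch of the substep fix: distribute particle birth times
# across the substeps of a frame and interpolate the emitter position
# for each birth time. Counts and positions are illustrative.
import numpy as np

def substep_emit(pos_a, pos_b, per_frame, substeps):
    """Spread `per_frame` birth positions across `substeps` intermediate
    times between two emitter positions (start/end of one frame)."""
    # Fractional birth time within the frame, one per particle.
    t = ((np.arange(per_frame) % substeps) + 0.5) / substeps
    t = t[:, None]
    # Linear interpolation of the emitter path at each birth time.
    return (1.0 - t) * pos_a + t * pos_b

# A fast-moving emitter: particles now line up along its path instead
# of clumping once per frame.
pts = substep_emit(np.array([0.0, 0.0, 0.0]),
                   np.array([2.0, 0.0, 0.0]),
                   per_frame=1000, substeps=8)
```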

After several iterations, we then decided to change the overall image completely. We wanted to illustrate the process of enameling, so I built a setup that would coat the "Dutch Oven" with enamel. Driven by a particle simulation, the shader was swapped with blue enamel where the simulation touched the metal. We also mixed between reality and fantasy in this shot. The fluid simulation itself was rendered in a separate pass.

In conclusion, it was a pleasure working with FerebeeLane's creative team on this 30-second product film.



About the Artist