May 23, 2025
Created and written by Teoman Şirvancı. Edited by Leif Pedersen.
This nostalgic F1 toy car is nearly 50 years old and belonged to my father as a child. Recreating it digitally was both a way to preserve a personal memory and an opportunity to explore different tools and workflows while creating a meaningful piece for my portfolio.
I used RenderMan for physically accurate shading and lighting. I matched the CG render with real turntable footage one-to-one, combining both through camera tracking and compositing. This approach resulted in a presentation that blends physical nostalgia with digital visual storytelling in a single frame.
The project began with a photogrammetry-based workflow, which allowed me to capture accurate visual reference data from the original toy. After processing the scans, I imported the solved cameras into Maya and used the projected images as precise guides to manually reconstruct the model. This helped me achieve a one-to-one match with the physical object in terms of shape, proportion, and surface silhouette.
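To give a sense of that setup, here is a minimal sketch of rebuilding one solved camera in Maya and attaching its reference photo as a semi-transparent modeling guide. The focal length, transform, and image path are placeholders rather than values from my actual solve.

```python
# Minimal sketch: rebuild one solved photogrammetry camera in Maya and
# attach its reference photo as a see-through modeling guide.
# Focal length, transform, and image path are placeholder values.
import maya.cmds as cmds

cam, cam_shape = cmds.camera(
    focalLength=35.0,              # from the photogrammetry solve
    horizontalFilmAperture=1.417,  # 36 mm sensor width, in inches
    verticalFilmAperture=0.945,    # 24 mm sensor height, in inches
)
cmds.xform(cam, worldSpace=True,
           translation=(12.4, 8.1, 20.7),  # solved camera position
           rotation=(-15.0, 30.0, 0.0))    # solved camera orientation

ip, ip_shape = cmds.imagePlane(camera=cam_shape, fileName="scan/ref_0001.jpg")
cmds.setAttr(ip_shape + ".alphaGain", 0.5)  # semi-transparent overlay
```

With one of these per solved view, the projected images line up as guides for rebuilding the surface by hand.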
The geometry itself was modeled with production-ready standards in mind, prioritizing both structural integrity and shading efficiency. By maintaining clean topology and a consistent edge flow across the entire model, I ensured a stable foundation for the downstream UV layout, texturing, and shading stages that followed.
Seeing the gray material and wireframe side by side reveals both the surface behavior and the underlying structural flow of the model. With RenderMan’s PxrVisualizer integrator, this step renders almost in real time.
Gray shader and wireframe
I added subtle dust and dirt buildup using a simple scattering setup in Maya Bifrost.
By distributing a few random geometry pieces across the surface, I was able to enhance the sense of realism and material presence in the scene.
Maya Bifrost dust generation
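The production setup lives in Bifrost’s node graph, but the underlying idea is simple enough to sketch with plain maya.cmds: drop instanced debris pieces at random face centers with randomized rotation and scale. The mesh and source object names below are hypothetical.

```python
# Sketch of the scatter idea in plain maya.cmds (the real setup used
# Bifrost). "toyBody" and the dust sources are hypothetical names.
import random
import maya.cmds as cmds

random.seed(7)
mesh = "toyBody"
sources = ["dustChunk1", "dustChunk2", "dustChunk3"]
face_count = cmds.polyEvaluate(mesh, face=True)

for i in random.sample(range(face_count), min(200, face_count)):
    # Average the face's vertex positions to find its center.
    verts = cmds.polyListComponentConversion(f"{mesh}.f[{i}]", toVertex=True)
    pts = cmds.xform(verts, q=True, ws=True, t=True)
    xs, ys, zs = pts[0::3], pts[1::3], pts[2::3]
    cx, cy, cz = sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs)

    # Drop a randomly chosen, randomly oriented debris instance there.
    piece = cmds.instance(random.choice(sources))[0]
    cmds.move(cx, cy, cz, piece, worldSpace=True)
    cmds.rotate(0, random.uniform(0, 360), 0, piece)
    s = random.uniform(0.2, 1.0)
    cmds.scale(s, s, s, piece)
```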
The most important aspect of the UV layout for me was maintaining consistent texel density and ensuring clean transitions between surfaces. I organized the UDIMs based on four primary surface types, which helped simplify the masking and cleanup phases later on.
I packed the UV islands with minimal fragmentation, making sure each area had enough texture space. This approach proved very efficient, both for preserving resolution and reducing visual clutter while painting in Mari. It allowed me to keep the scene lightweight while still achieving sufficient detail in both close-up shots and wide views.
Maya UV layout and UDIM distribution
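Consistent texel density comes down to simple arithmetic: the texture pixels a UV island receives, divided by the world-space size it covers, should be roughly the same everywhere. A tiny check, with illustrative numbers rather than the project’s actual measurements:

```python
# Quick texel-density check: texture pixels per centimeter of surface.
# All numbers below are illustrative, not the project's actual values.

def texel_density(uv_span, texture_res, world_size_cm):
    """Texture pixels covering one centimeter of surface."""
    return (uv_span * texture_res) / world_size_cm

# A 12 cm body panel occupying 0.3 of a 4K UDIM tile:
body = texel_density(uv_span=0.3, texture_res=4096, world_size_cm=12.0)
# A 4 cm wheel occupying 0.1 of the same tile:
wheel = texel_density(uv_span=0.1, texture_res=4096, world_size_cm=4.0)

print(f"body:  {body:.1f} px/cm")   # ~102.4 px/cm
print(f"wheel: {wheel:.1f} px/cm")  # ~102.4 px/cm -> consistent density
```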
I used Mari for the texturing process and really enjoyed the workflow, especially the flexibility of its node-based structure, which helped me keep everything clean and organized. Since I used the Physical Specular mode in the PxrSurface material, I prepared my texture structure accordingly. I began by projecting the base color data, captured using cross-polarization techniques, onto the model. By applying polarizing filters to both the camera and the light source, I was able to eliminate surface reflections, allowing for more accurate color and cleaner mask extraction.
After projection, I performed extensive cleanup using the Clone Tool to correct imperfections and surface inconsistencies. In areas where no scan data was available, I created the textures myself to preserve the material continuity. To isolate material regions, I relied on the Color to Mask method, which allowed me to generate precise masks directly from the base color information. This enabled a layered approach, where I built up surface detail such as chipped paint, dirt, filler material, and subtle wear.
TIP
Throughout the project, I used ACES 1.2 as the color management system by defining OCIO environment variables. This helped maintain accurate and consistent color across Mari, RenderMan, and every other stage of the production workflow.
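As a sketch of how that looks in practice, the variable only needs to be defined once in the environment each application launches from; the config path here is a placeholder.

```python
# Sketch of a launch wrapper that points every DCC at the same ACES 1.2
# OCIO config before startup (the config path is a placeholder).
import os
import subprocess

os.environ["OCIO"] = "/shows/toycar/ocio/aces_1.2/config.ocio"

# Any application started from this environment inherits the variable,
# so Mari, Maya/RenderMan, and Nuke all resolve the identical config.
subprocess.run(["mari"])
```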
I kept the shading process as minimal as possible, connecting the texture maps created in Mari directly into the PxrSurface shader in RenderMan. Apart from a basic remap node used for fine-tuning the roughness response, no complex node networks were built.
This approach worked well because I had also used a PxrSurface-based setup during the texturing phase in Mari, which helped me achieve consistent results in RenderMan.
To ensure physical accuracy, I applied correct IOR and Extinction Coefficient values for each material.
Since all maps were prepared with these values in mind, no further adjustments were needed on the RenderMan side.
Very simple RenderMan for Maya shading network
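For illustration, a network like this can be wired in a few lines of Maya Python. This is a sketch under assumptions, not my exact scene: it presumes RenderMan for Maya is loaded, uses Maya’s remapValue utility to stand in for the remap node, and the file paths, fine-tune values, and Fresnel numbers are placeholders.

```python
# Minimal sketch of the shading setup described above, assuming RenderMan
# for Maya is loaded. Paths, fine-tune values, and the Fresnel numbers
# are placeholders rather than the project's exact data.
import maya.cmds as cmds

srf = cmds.shadingNode("PxrSurface", asShader=True, name="toyCar_srf")
cmds.setAttr(srf + ".specularFresnelMode", 1)  # Physical specular mode

# Base color straight from the Mari export (UDIMs via the _MAPID_ token).
base = cmds.shadingNode("PxrTexture", asTexture=True, name="baseColor_tex")
cmds.setAttr(base + ".filename", "textures/baseColor._MAPID_.tex", type="string")
cmds.setAttr(base + ".atlasStyle", 1)  # UDIM (Mari) atlas style
cmds.connectAttr(base + ".resultRGB", srf + ".diffuseColor")

# Roughness through a single remap for fine-tuning the response
# (Maya's remapValue utility stands in here for the remap node used).
rough = cmds.shadingNode("PxrTexture", asTexture=True, name="roughness_tex")
cmds.setAttr(rough + ".filename", "textures/roughness._MAPID_.tex", type="string")
cmds.setAttr(rough + ".atlasStyle", 1)
remap = cmds.shadingNode("remapValue", asUtility=True, name="roughness_remap")
cmds.connectAttr(rough + ".resultR", remap + ".inputValue")
cmds.setAttr(remap + ".outputMax", 0.9)  # fine-tune the roughness ceiling
cmds.connectAttr(remap + ".outValue", srf + ".specularRoughness")

# Measured IOR / extinction values per material (illustrative chrome numbers).
cmds.setAttr(srf + ".specularIor", 3.1, 3.2, 2.3, type="double3")
cmds.setAttr(srf + ".specularExtinctionCoeff", 3.3, 3.3, 3.1, type="double3")
```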
For the lighting phase, I used a custom HDRI captured from a real-world setup I built specifically for the turntable shoot. After filming the video, I took a 360° HDRI using my phone and processed it into a usable environment map.
A quick behind-the-scenes moment with my dad while setting up the scene.
To better control shadow definition, I added a single spot light in CG that matched the main shadow direction seen in the footage. This allowed me to reinforce the lighting without deviating from the physical reference.
I converted the HDR image to ACEScg inside Nuke before using it in RenderMan. This ensured proper color management across the pipeline and allowed the lighting to behave predictably in CG.
Lighting rig in RenderMan for Maya
Before integrating the HDRI into the scene, I performed a tone-matching step in Nuke using the mid-gray value from the checkerboard captured in the footage. This helped align the HDRI’s overall light intensity with the real-world setup, ensuring that lighting direction and energy levels in RenderMan remained both visually and physically accurate.
Tone mapping the HDRI in Nuke
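Concretely, this stage reduces to a colorspace conversion followed by a single gain: multiply the HDRI by the ratio of the plate’s mid-gray to the mid-gray measured under the raw HDRI. A minimal Nuke Python sketch, with placeholder paths and sampled values, and colorspace names taken from the ACES 1.2 config:

```python
# Sketch of the HDRI prep in Nuke: convert to ACEScg, then scale the
# overall intensity so mid-gray matches the checkerboard in the plate.
# Paths and sampled values below are placeholders.
import nuke

hdri = nuke.nodes.Read(file="hdri/set_360.exr")

# Bring the HDRI into the ACEScg working space (ACES 1.2 config names).
to_acescg = nuke.nodes.OCIOColorSpace(
    in_colorspace="Utility - Linear - sRGB",
    out_colorspace="ACES - ACEScg",
)
to_acescg.setInput(0, hdri)

# Gain = plate mid-gray / mid-gray sampled under the raw HDRI.
plate_mid_gray = 0.18   # sampled from the checkerboard in the footage
hdri_mid_gray = 0.142   # sampled from a render lit by the raw HDRI
gain = plate_mid_gray / hdri_mid_gray

exposure = nuke.nodes.Grade(multiply=gain)
exposure.setInput(0, to_acescg)
```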
Before starting the rendering process in Maya, I first needed to match the object’s rotation to the real turntable footage. Using motion tracking in Nuke, I applied a bandpass filter to isolate the platform’s movement and manually tracked key points to identify its rotation axis, though the data was limited to 2D space.
After importing the data into Maya, I manually adjusted the scale and orientation to align the physical turntable with the digital scene. By projecting the 2D tracks onto a 3D plane, I was able to accurately synchronize the object’s rotation in CG with the filmed footage.
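Once the tracks live on the platform plane, the per-frame rotation is just the change in each tracked point’s angle about the pivot. A small sketch of that arithmetic, with illustrative numbers:

```python
# Sketch of recovering the turntable's per-frame rotation once the 2D
# tracks have been projected onto the platform plane. Points are (x, z)
# positions on that plane; the data here is illustrative.
import math

center = (0.0, 0.0)  # solved platform pivot on the plane

def angle_about_pivot(point, pivot=center):
    """Angle of a tracked point around the pivot, in degrees."""
    return math.degrees(math.atan2(point[1] - pivot[1], point[0] - pivot[0]))

# Projected positions of one tracked marker on consecutive frames.
track = [(10.0, 0.0), (9.94, 1.05), (9.78, 2.08)]

angles = [angle_about_pivot(p) for p in track]
# Unwrap frame-to-frame deltas so the rotation curve stays continuous.
deltas = [(b - a + 180.0) % 360.0 - 180.0 for a, b in zip(angles, angles[1:])]
print(deltas)  # per-frame rotation to key on the CG turntable, ~6 deg/frame
```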
After aligning the CG object with the footage, I prepared the scene for the final render. To capture the car’s shadow on the ground without affecting the alpha, I used a holdout plane. This made it easier to integrate the render into the background plate.
Before rendering, I ran several test frames to ensure that depth of field (DOF) and motion blur were behaving correctly by comparing their sharpness with the reference footage. The final render was done using RenderMan RIS.
For compositing flexibility, I exported a set of AOVs using Light Path Expressions (LPE), including diffuse direct, diffuse indirect, specular direct, and specular indirect. I also used Cryptomatte for fast and accurate masking during compositing.
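For reference, these four passes correspond to the standard Light Path Expressions found in RenderMan’s LPE presets, collected here as a simple lookup (the AOV names themselves are just a naming convention):

```python
# The four lobes exported as AOVs, with their standard RenderMan
# Light Path Expressions (from the RenderMan LPE presets).
aov_lpes = {
    "directDiffuse":    "color lpe:C<RD>[<L.>O]",
    "indirectDiffuse":  "color lpe:C<RD>[DS]+[<L.>O]",
    "directSpecular":   "color lpe:C<RS>[<L.>O]",
    "indirectSpecular": "color lpe:C<RS>[DS]+[<L.>O]",
}
```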
To reduce sampling noise, I enabled the RenderMan Machine Learning Denoiser, which noticeably improved the clarity of the output while keeping render times low.
Render settings showing RenderMan’s motion blur, ACES OCIO, and LPE AOV output configurations.
This project marked my first experience with node-based compositing in Nuke.
To keep the pipeline consistent, I worked within an ACES color workflow, configuring OCIO in Nuke to convert sRGB footage into ACEScg.
I combined AOVs using shuffle nodes and used Cryptomatte to isolate specific surfaces where I enhanced or reduced reflections based on lighting and composition.
AOV shuffle and Cryptomatte
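In practice, that stage amounts to pointing Nuke’s root at the OCIO config, shuffling each AOV layer out of the EXR, and plus-merging the lobes back into the beauty. A hedged sketch, assuming the render layers are named after the AOVs and with a placeholder config path:

```python
# Sketch of the Nuke setup: OCIO at the root, then rebuild the beauty
# from the four LPE AOVs by shuffling each layer out and plus-merging.
# Layer names are assumed to match the AOVs defined at render time.
import nuke

# Point Nuke's color management at the shared OCIO config (placeholder path).
nuke.root()["colorManagement"].setValue("OCIO")
nuke.root()["OCIO_config"].setValue("custom")
nuke.root()["customOCIOConfigPath"].setValue(
    "/shows/toycar/ocio/aces_1.2/config.ocio")

read = nuke.nodes.Read(file="renders/toycar_beauty.####.exr")

layers = ["directDiffuse", "indirectDiffuse",
          "directSpecular", "indirectSpecular"]

shuffles = []
for layer in layers:
    sh = nuke.nodes.Shuffle(label=layer)
    sh["in"].setValue(layer)  # pull this AOV layer into RGBA
    sh.setInput(0, read)
    shuffles.append(sh)

# Plus the lobes back together to reconstruct the beauty.
rebuild = shuffles[0]
for sh in shuffles[1:]:
    merge = nuke.nodes.Merge2(operation="plus")
    merge.setInput(0, rebuild)  # B input
    merge.setInput(1, sh)       # A input
    rebuild = merge
```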
Some background elements required manual rotoscoping to separate them cleanly. For the breakdown, I also included UVs, wireframes, and checkerboard overlays. The TV screen, which I modeled and rendered separately, was composited directly into the final shot in Nuke.
... and here we have the final image!
This project was a personal journey for me. Don’t forget to download the project and take a closer look. I hope you enjoy it as much as I did. Have fun!
Teoman Şirvancı is an artist specializing in hard-surface modeling and look development. He is currently building his portfolio for film and visual effects, and aims to take his first steps toward a career in the industry. Inspired by storytelling and the creative aspects of visual design, he’s excited to grow as an artist and collaborate with creative teams.
This project is available under an Attribution-NonCommercial 4.0 International license. This allows you to share and redistribute it for non-commercial purposes, as long as you credit the original authors.