Path Tracing
For the Rendering master's course at TU Vienna (2025SS), the assignments iteratively built up a full path tracer from scratch, starting with direct illumination only and ending with recursive and iterative path tracing, a BVH acceleration structure, multiple BSDFs, next event estimation (NEE), and multiple importance sampling (MIS). I worked on it alone, received a Sehr Gut (the top grade), and placed 2nd among all final renders that semester. You can find my final render in the Hall of Fame. For copyright reasons I cannot share the code, so here are some renders instead. All rendering is CPU only, with no GPU acceleration.
What I Learned
This course gave me a solid grounding in the rendering equation and the math behind it. A few things really clicked during it. The change of variables between solid-angle and surface-area integration, and what it actually means geometrically. How MIS works, and why it helps so much in scenes with both small bright lights and large diffuse ones. And probably the biggest conceptual unlock: you can make almost any random choice in a Monte Carlo estimator and it stays unbiased, as long as you divide each sample's contribution by the probability of that choice. Once that clicked, a lot of other techniques started making sense.
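The divide-by-the-probability idea is easiest to see in a toy example, separate from rendering. This sketch (my own illustration, not course code) estimates a sum of many terms, think of picking one light source out of many, by sampling a single term per step and dividing by the probability of picking it:

```python
import random

def estimate_sum(values, probs, n, seed=0):
    """Unbiased estimate of sum(values) by sampling one term per step.

    probs[i] is the probability of picking index i (must sum to 1).
    E[values[i] / probs[i]] = sum(p_i * a_i / p_i) = sum(a_i), so the
    estimator is unbiased for ANY probabilities, as long as every term
    with a nonzero value has a nonzero chance of being picked.
    """
    rng = random.Random(seed)
    idx = list(range(len(values)))
    total = 0.0
    for _ in range(n):
        i = rng.choices(idx, weights=probs)[0]
        total += values[i] / probs[i]  # divide by the choice's probability
    return total / n

# sum([1, 2, 3, 4]) == 10; both choices of probabilities converge to it.
uniform = estimate_sum([1, 2, 3, 4], [0.25] * 4, 20000)
skewed = estimate_sum([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4], 20000)
```

The skewed case also shows why the choice still matters for variance: with probabilities proportional to the values, every sample contributes exactly the true sum, which is importance sampling in miniature.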
The course also covered biased methods like photon mapping, which was interesting to contrast with the unbiased approach and to think about when that tradeoff is worth it.
Renders
Final Render
The final scene shows a vase of roses on a table, tipping over and splashing water. The only light source is a window in the background, partially obscured by curtains. The room has a rug, an armchair, and a clock. Most of the work went into making the roses look convincing. The petals and leaves use a single-scattering approximation for subsurface scattering, which works well since they are thin enough that full SSS would be overkill. I also used a sheen BSDF for the fabric and the fuzz on the roses. The fluid simulation took many hours of refinement to get right.
Path Guiding
I also implemented path guiding, based on two papers: "Practical Path Guiding for Efficient Light-Transport Simulation" by Müller et al. and its follow-up "'Practical Path Guiding' in Production". Path guiding learns where light comes from as rendering progresses, and biases new samples toward those directions to reduce noise. The spatial structure is a binary tree that subdivides wherever many ray hits accumulate. Each leaf of that tree holds a quadtree that records the incoming radiance distribution, with the axes mapping to polar angles. The quadtree is refined after each learning iteration, splitting nodes until each node's stored energy falls below a fixed fraction of the total. The whole structure is trained over several rounds, starting at 1 sample per pixel and doubling the count each iteration.
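The directional refinement step can be sketched in a few lines. This is a hypothetical illustration of the idea described above, the class name, threshold value, and energy-seeding detail are mine, not the paper's: any node holding more than a fixed fraction of the tree's total recorded energy is split into four children, so bright directions get finer resolution.

```python
class QuadNode:
    def __init__(self, energy=0.0):
        self.energy = energy   # radiance recorded in this quadrant
        self.children = None   # None for a leaf, else a list of 4 children

    def refine(self, total_energy, threshold=0.01):
        """Split leaves whose energy exceeds threshold * total_energy."""
        if self.children is None and self.energy > threshold * total_energy:
            # Seed each child with a quarter of the energy as a starting
            # guess; the next learning iteration re-records it finer.
            self.children = [QuadNode(self.energy / 4.0) for _ in range(4)]
        if self.children is not None:
            for child in self.children:
                child.refine(total_energy, threshold)

def leaf_count(node):
    if node.children is None:
        return 1
    return sum(leaf_count(c) for c in node.children)

# All energy in one node: splitting continues until each leaf's share
# (which shrinks by 4x per level) drops below the threshold fraction.
root = QuadNode(1.0)
root.refine(total_energy=1.0)
```

In a real run the recorded radiance is uneven, so only the bright quadrants subdivide deeply, which is exactly what concentrates samples toward the directions light actually comes from.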
The implementation works, but performs worse than the standard path tracer on my test scene. I think I understand why. The scene has a small light source hidden behind two offset walls, each with a small hole, and the walls are not axis-aligned. Many spatial nodes end up straddling a wall, which means a single quadtree is being trained from two essentially uncorrelated sides of the same surface, so the directional distribution it learns is meaningless. A path guiding implementation that splits spatial cells based on directional variance rather than hit count would likely handle this better, but I did not have time to pursue that. The original paper's authors also note that their method increases noise in certain scenarios, so at least I am in good company.