At SIGGRAPH 2019, NVIDIA explained how to harness massively parallel path space filtering in order to achieve high quality real-time ray traced global illumination in games.
Imagine seeing a point on the ceiling through a mirror. With offline rendering, you could collect the radiance over the hemisphere above that point to determine how bright it is there. But this is expensive: the technique, called splitting, multiplies the number of rays traced at every such point. That is not affordable in real-time ray tracing.
So… how do you get workable results in a real-time setting? Consider all the paths shot from the eye into the scene. Instead of splitting rays at each hit point, you can share information among neighboring paths.
Using a hash map, you can greatly improve the performance of path space filtering, yielding a dramatically better visual quality at interactive frame rates.
“Instead of calculating one average per vertex, you can calculate one average per cell. And that is super cheap, because each hit point is touched only once. You average in the cell. And then to render an image, you just query the cell. So it’s linear complexity in the number of pixels,” explained NVIDIA’s Alex Keller. “The principle is: we’re sharing with neighbors, so we’re avoiding range search, which allows us to get faster.”
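The idea Keller describes can be sketched as follows. This is a minimal illustration, not NVIDIA's GPU implementation: hit points are quantized into grid cells that serve as hash-map keys, radiance is accumulated per cell in a single linear pass, and rendering then queries the cell average. The cell size and all function names here are hypothetical.

```python
# Sketch of hash-map path space filtering (illustrative, not NVIDIA's code):
# quantize hit points into grid cells, accumulate radiance per cell in one
# O(n) pass, then query the per-cell average instead of doing a range search.

from collections import defaultdict

CELL_SIZE = 0.25  # hypothetical voxel edge length in world units

def cell_key(position, cell_size=CELL_SIZE):
    """Quantize a 3D hit point to an integer grid cell (the hash-map key)."""
    return tuple(int(c // cell_size) for c in position)

def accumulate(hit_points):
    """One pass over all hit points: each point is touched exactly once."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for position, radiance in hit_points:
        key = cell_key(position)
        sums[key] += radiance
        counts[key] += 1
    return sums, counts

def query(position, sums, counts):
    """Look up the filtered (averaged) radiance for a hit point's cell."""
    key = cell_key(position)
    if counts[key] == 0:
        return 0.0
    return sums[key] / counts[key]

# Two nearby hit points land in the same cell and share their radiance:
hits = [((0.10, 0.10, 0.10), 2.0), ((0.20, 0.15, 0.12), 4.0)]
s, c = accumulate(hits)
print(query((0.12, 0.11, 0.10), s, c))  # averaged radiance: 3.0
```

In a real renderer the hash map lives in GPU memory and the averaging runs massively in parallel, but the complexity argument from the quote is the same: one accumulation per hit point, one lookup per pixel.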
The video below is a brief excerpt of a full thirty-minute talk, entitled “Ray-Traced Global Illumination for Games: Massively Parallel Path Space Filtering.” It can be found here. The full talk provides more useful details, such as how to use jitter in order to resolve quantization artifacts.
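On the jitter point, a plausible reading (the details here are an assumption, not taken from the talk) is that the quantization grid itself is randomly offset each sample, so the hard cell boundaries of a fixed grid land in different places every frame and average out over time:

```python
# Hedged sketch of grid jittering: offsetting the quantization grid by a
# random amount per sample so fixed cell boundaries don't imprint visible
# quantization artifacts on the image. Names and details are hypothetical.

import random

CELL_SIZE = 0.25  # hypothetical voxel edge length in world units

def jittered_cell_key(position, offset, cell_size=CELL_SIZE):
    """Quantize with a per-sample grid offset instead of a fixed grid."""
    return tuple(int((c + offset) // cell_size) for c in position)

# One offset is drawn per sample (or per frame) and shared by every hit
# point in that pass, so points still hash consistently within the pass:
offset = random.uniform(0.0, CELL_SIZE)
print(jittered_cell_key((0.1, 0.1, 0.1), offset))
```

Averaging results over many such jittered passes blurs the cell-boundary discretization that a single fixed grid would produce.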
You can also read the full paper, “Massively Parallel Path Space Filtering”, by Nikolaus Binder, Sascha Fricke, and Alexander Keller (in PDF format) here.