
Ray Tracing From the 1980s to Today: An Interview with Morgan McGuire, NVIDIA

Morgan McGuire, a member of the NVIDIA research team, walks us through the history of shaders, rasterization, and finally, ray tracing.

Q: The games industry has been working on developing a ray tracing solution for decades. Can you give us an overview of the history?

A: It’s really interesting to look back at the late ’80s. That was when ray tracing and artificial intelligence research really took off. At the time, most of us believed that ray tracing and neural nets – the technology that underlies deep learning – would be the future. But the problem was that we needed about a factor of a million more processing power than the CPUs of the day to make that technology useful for consumers. It took several decades to finally get to the point where we have GPUs that close that gap and let us make these ideas from the ’80s a reality.

The Turing launch is exciting because it’s the first time that ray tracing and neural net technologies have been scaled to work in mass-market applications such as video games. Turing is the latest, greatest architecture for today’s games, but it’s a lot more than that; it’s actually the beginning of a new era of computing. This is why you’re starting to see hints of new game features that seem almost unbelievable. They’re really based on a totally different framework.

Q: Moving from the GeForce 3 era to the 1080 Ti era, what was learned during that time frame?

A: The GeForce 3 marked the advent of consumer programmable shading. It took about twenty years to really max out what we can do with pixel shading, specifically. The first edition of a book called “Real-Time Rendering” was a slim 150 pages. The latest – the fourth edition – is now 1174 pages! The text went from explaining a couple of knobs and levers you could move, to articulating an incredible array of insider knowledge.

There are some “looks” that we’ve been able to get through tremendous effort – thanks to engineers and artists working together to figure out how to fake the effects that we want cinematically on top of these legacy rasterization-only engines. And that’s exactly what film was doing around 2000. Film has always been a little bit ahead of games on the tech curve, because they didn’t need to worry about performance at quite the extreme level. When we got programmable shading in games, they’d already had it for a while. Then they got ray tracing, and we had to wait another 20 years for that. In 1998, for example, Pixar’s A Bug’s Life was using a film pipeline that looks a lot like a pixel shader game pipeline with first-generation RTX today.

Prior to 2018, it had been impossible for game developers to integrate ray tracing at runtime. The performance just wasn’t there, even though we all wanted to be able to cast rays. Now, with the introduction of RTX, we are witnessing the transition from doing everything with a complicated pixel shader pipeline to using a simpler pixel shader pipeline for real-time performance, and then layering on ray tracing to get those sophisticated effects in a robust way.
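To make that layering concrete, here is a minimal CPU-side sketch of the hybrid idea: a stand-in “raster” pass resolves which surface point each pixel sees, and a ray-traced shadow pass is layered on top of that result. The scene, the function names, and the structure are all illustrative assumptions for this article, not any real engine or the DXR API.

```python
# Toy hybrid pipeline: rasterization resolves primary visibility,
# then a ray-traced shadow pass is layered on top. Purely illustrative.
import math

WIDTH, HEIGHT = 40, 20
LIGHT = (5.0, 8.0, -2.0)                    # point light position
SPHERE_C, SPHERE_R = (0.0, 1.0, 0.0), 1.0   # occluding sphere

def raster_pass():
    """Stand-in for rasterization: for each pixel, record the visible
    point on the ground plane y = 0 (a simple top-down camera)."""
    gbuffer = {}
    for py in range(HEIGHT):
        for px in range(WIDTH):
            x = (px / WIDTH - 0.5) * 8.0
            z = (py / HEIGHT - 0.5) * 8.0
            gbuffer[(px, py)] = (x, 0.0, z)  # world-space position
    return gbuffer

def shadow_ray_occluded(p):
    """The ray-traced layer: cast a ray from p toward the light and
    test it against the occluder sphere (standard ray-sphere test)."""
    to_light = tuple(a - b for a, b in zip(LIGHT, p))
    dist = math.sqrt(sum(c * c for c in to_light))
    d = tuple(c / dist for c in to_light)     # normalized direction
    oc = tuple(a - b for a, b in zip(p, SPHERE_C))
    b = 2.0 * sum(o * k for o, k in zip(oc, d))
    c = sum(o * o for o in oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < dist                     # hit between surface and light

gbuffer = raster_pass()                       # raster resolves visibility...
for py in range(HEIGHT):                      # ...rays add robust shadows
    print("".join("." if shadow_ray_occluded(gbuffer[(px, py)]) else "#"
                  for px in range(WIDTH)))
```

Running it prints an ASCII ground plane with the sphere’s shadow traced per pixel; the point is the structure, where the raster pass and the ray pass each do what they are best at.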

Of all the visual effects this unlocks, global illumination is one that just feels like magic. We all know that a AAA game development team can give you a screenshot that looks like a movie using any rasterization engine. And they do, for the marketing shots. But as developers and players, we know that the pictures on the back of the box are the best the game could possibly look; they don’t really reflect the experience when you’re in the game. They are showing you the aspirational image.

The core technology of ray tracing ensures that the game always looks like those best-possible images. It makes it possible for a development team to give you great visuals without hacking every single scene for the glory shot, but just letting the artists set up scenes and having the renderer do its job from any viewpoint.

Q: For a long time, I’ve heard people say they are waiting for games to look as good as Pixar movies. What they really are asking for is ray tracing, right?

A: Absolutely. And it’s not just getting rid of the shadow aliasing so you get sharp shadows, or getting real offscreen reflections… it’s not any one effect. It’s when all of that comes together and everything just looks right and reacts correctly. That’s when you start to get that magical experience that matches the movie experience. It’s just bulletproof. Pixar’s films have evolved a lot since Toy Story. But every feature film and every short they make is so consistent that even the oldest films have a robustness we couldn’t achieve in games before.

Now, for film, you could cheat a lot because you knew where the camera was and exactly what was going to happen in a scene. For games, we don’t have that luxury. Ray tracing gives a good art team the robustness to get that “Pixar” level of consistency out of their engines, while fixing many of the quality limitations of our real-time visual effects.

Q: So Morgan, let’s talk about how ray tracing can help developers with baking.

A: That’s one of the most exciting benefits for developers. Using ray tracing to speed up baking goes beyond what happens at runtime. It is relevant for a DXR title, but also when shipping a console game, or when working on a PC game that supports the full range of DX11 and DX12 GPUs.

In the past, an artist would move a light, and then they’d have to wait five or ten minutes to see the lighting update. And you can imagine how terrible your workflow is if you have to light twenty levels, and every time you make any changes to any one, you have to go off and get a cup of coffee, because nothing’s going to be happening on screen for a while and you can’t touch the scene again until the lighting bake finishes.

I’ve been working closely with the Frostbite game engine team at EA, and they were describing to me how their new global illumination preview tools work for artists. With clever use of ray tracing, they’ve gotten to the point where they can prioritize surfaces in the viewport, and then those update almost in real time. You move a light, and almost immediately you see the new lighting solution.
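To show the shape of that idea, here is a toy sketch of a prioritized progressive baker. This is not Frostbite’s implementation; the scene, the priority heuristic, and every name here are assumptions for illustration. Each “frame” the baker spends its ray budget on the texels whose lighting changed most, so the preview converges first where the artist’s edit mattered.

```python
# Toy prioritized progressive lightmap baker. Illustrative only: the
# priority heuristic and lighting model are stand-ins, not Frostbite's.
import heapq, random

def direct_light(texel_pos, light_pos, rays=8):
    """Monte Carlo estimate of direct lighting at a texel: jitter the
    light position to emulate an area light and average the samples."""
    total = 0.0
    for _ in range(rays):
        jitter = [c + random.uniform(-0.2, 0.2) for c in light_pos]
        d2 = sum((a - b) ** 2 for a, b in zip(texel_pos, jitter))
        total += 1.0 / (1.0 + d2)        # simple falloff, no occluders
    return total / rays

class ProgressiveBaker:
    def __init__(self, texel_positions):
        self.texels = texel_positions
        self.values = [0.0] * len(texel_positions)
        # Max-heap via negated priority; start with every texel dirty.
        self.queue = [(-1.0, i) for i in range(len(texel_positions))]
        heapq.heapify(self.queue)

    def light_moved(self):
        """An edit invalidates the solution: re-prioritize all texels."""
        self.queue = [(-1.0, i) for i in range(len(self.texels))]
        heapq.heapify(self.queue)

    def bake_step(self, light_pos, budget=64):
        """Spend this frame's ray budget on the highest-priority texels."""
        for _ in range(min(budget, len(self.queue))):
            _, i = heapq.heappop(self.queue)
            new = direct_light(self.texels[i], light_pos)
            change = abs(new - self.values[i])
            self.values[i] = new
            # Texels still changing stay near the front of the queue.
            heapq.heappush(self.queue, (-change, i))

# A 16x16 lightmap over a ground plane: bake, move the light, and watch
# the solution refine a little more on every step.
texels = [(x * 0.5, 0.0, z * 0.5) for x in range(16) for z in range(16)]
baker = ProgressiveBaker(texels)
baker.bake_step((2.0, 3.0, 2.0))
baker.light_moved()                      # artist drags the light
for _ in range(4):
    baker.bake_step((6.0, 3.0, 6.0))     # most-changed texels refine first
print("sample texel value:", round(baker.values[0], 3))
```

The design point is the budget: instead of re-baking everything and blocking the artist, the baker amortizes work across frames and always spends its rays where the image is still wrong.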

This means a process that used to be “move something and wait” – where maybe you got fifty changes you could try in a day – is suddenly continuous, and you’re making thousands of changes. The process of lighting a game can be as interactive as the process of playing the game, and that opens up huge productivity benefits. The lighting director or artist will find that all of their time in front of the machine is useful. They are no longer constantly waiting on computation; they can try much more complicated lighting solutions, because they can actually experiment. It’s like it is in the real world on a film set, where you just go out and grab a light and move it around. But even better, it’s without any real-world limitations. You can grab the sun and move it.

Most games today are limited by artists’ time. There are only so many highly trained, qualified lighting artists in the world. Getting those lighters working efficiently is really the key to making games look better – letting the texture artists see their work with global illumination, and letting the environment artists use light as part of their palette.

Q: When do you think all games will feature ray tracing, regardless of platform?

A: All of the vendors have done a really good job of introducing high-performance parallel ray tracing solutions. There is Intel Embree, there’s Radeon Rays, and there’s NVIDIA OptiX. Once we had gotten to the point where everybody in the field agreed this was an important technology, we just needed a little bit more hardware performance and a standard.

Microsoft came out with the DXR API. This gave us a standard to build to. And just like DirectX, it’s a way of letting every vendor provide the implementation that’s best for their hardware, while game developers write to a single API. And then, as that becomes adopted by game engines, you won’t even have to use the DXR API directly. You can work at a high level and then compile something out that will run on DirectX and Vulkan and all the different platforms.
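The “single API, many vendor implementations” point is just an interface boundary, and a tiny sketch can show it. The class and method names below are invented for illustration; they are not the real DXR interface, only the shape of the contract it creates between engine code and drivers.

```python
# Toy illustration of the single-API idea: engine code targets one
# interface, each vendor supplies its own backend. Names are invented.
from abc import ABC, abstractmethod

class RayTracingBackend(ABC):
    """What the game sees: one dispatch entry point, regardless of vendor."""
    @abstractmethod
    def dispatch_rays(self, width: int, height: int) -> str: ...

class VendorABackend(RayTracingBackend):
    def dispatch_rays(self, width, height):
        # A real backend would schedule traversal on its own hardware.
        return f"vendor A traced {width * height} rays"

class VendorBBackend(RayTracingBackend):
    def dispatch_rays(self, width, height):
        return f"vendor B traced {width * height} rays"

def render_frame(backend: RayTracingBackend):
    """Engine code is written once, against the common interface."""
    return backend.dispatch_rays(1920, 1080)

for backend in (VendorABackend(), VendorBBackend()):
    print(render_frame(backend))
```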

Battlefield V shipping in November – that’s very exciting, because that is the first title that really deeply uses hardware-accelerated ray tracing. It’s the Halo or DOOM or PONG of the ray tracing era. That’s the game that makes clear that everything is going to be different now and the bar has been raised.

Of course, we will eventually get to the point where it’s not just the biggest franchise from the biggest publisher using ray tracing; all games everywhere will be ray traced. It’s like shadows; modern games are just expected to have shadows. You don’t advertise, “this game has shadows!”.

So then, when do we get to the point where you don’t bother telling the player that a game has ray tracing, because of course it does? I think there are a lot of pieces that have to move into place for that. Microsoft coming out big in support of ray tracing and standardizing it, and then Khronos coming along and doing the same thing for Vulkan – I think those are the first steps. It has to lead with the APIs, and then we need hardware support across all the platforms.

So, at NVIDIA we’re all rooting for our colleagues at other hardware companies to support this technology. I think we’ll see all the hardware vendors shipping solutions for this, maybe in the next year or two, maybe five years, depending on how far along they are in their hardware implementations. And probably the sooner that happens, the better for everyone involved. We are really excited about our own implementation of ray tracing, but we want everybody to be there. We want all GPUs across the industry to support ray tracing.

Q: I think ultimately everyone will support ray tracing, but right now the ray tracing audience is as small as it’s ever going to be. So why should people start investing in adding ray tracing to their pipeline now, instead of just waiting for the audience to grow?

A: So, when programmable shading first shipped for consumer GPUs around 2001 in the GeForce 3, there was no performance advantage to using it. In fact, it was a performance regression if you tried to use the programmable pipeline to emulate the fixed-function pipeline. It would actually be slower.

The reason to use first-gen programmable shading was not speed – it was to make your game look better. You could suddenly access new looks that no one else could achieve with fixed function. There was a really big turnover in the industry at that point. One of the companies that adopted programmable shading really aggressively was a small shareware company called Epic MegaGames that was iterating on sequels to their Unreal video game.

We now think of Epic and Unreal Engine as dominant in the industry. That happened because they were an early adopter of programmable shading. They wanted to be on the bleeding edge, even if it was going to take a while for the market penetration to get there. Clearly, that approach paid off. The companies that waited too long – the ones that said, “No, we don’t need programmable shading, fixed function is fast enough for us, we’re happy with the way our games look,” those companies don’t exist anymore. And the graphics card companies that didn’t move to programmable shading don’t exist anymore. We now have AMD and NVIDIA, and Intel on the embedded side, because those are the ones that went all-in on programmable shading.

I think it’s important to look at ray tracing the same way: there’s a small market today in terms of the deployed base. And we’re starting to see some exciting titles that support ray tracing.

But if you take a wait-and-see approach, you risk falling far behind your competitors, at a time when customers are expecting ray-traced visuals in all of their games. And that’s risky, because it can take a few years for a studio to get proficient at using this technology.

I just think that the companies that adopt early are going to be the ones who win, and who grow in the new era of ray tracing (and machine learning). And the ones who try to cling to old technology are going to have a really hard time. So, I want to see all the great game companies out there all move forward with us into this era together.

Q: Does everyone agree that ray tracing is the future of in-game graphics?

A: Pretty much every graphics textbook ends by saying that the future is going to be ray tracing; we just don’t know when it’s going to happen. So, it’s definitely been known on the high-end academic side for a while. We were just waiting for hardware. We saw films switch over… and when film switched to ray tracing, it was about ten years until there was no rasterization left in the film industry. Ten years for them is probably three years for us. I don’t think that rasterization will go extinct in games, though. Rasterization is an important technique for games, maybe forever. So, we’ll have hybrid rasterization plus ray tracing. But the point I want to make is that every VFX company that survived and prospered in film embraced ray tracing early.

With the programmable shading model and with the film model, I think we’ve seen two really good recent examples of why it’s critical to jump on any flexibility in the GPU and any new programming model early. That’s the way to move forward.

Ray tracing is THE big one. We haven’t done anything this big as a field in 20 years, so it’s easy, if you’ve only been in the field for 10 years, to say, “this is a fad”. It’s not. Something like this happens maybe once in a couple of decades in the field. I’m excited this is happening now, and this is the real deal. Everybody needs to jump on this now.

Q: Would you compare the impact of ray tracing to any past game technology innovation?

A: DOOM was the first big ray-cast consumer game. It opened people’s eyes. You could just barely push real-time 3D with some heavy constraints. Quake was the first game that showed that you could do true 3D in terms of unconstrained camera position and actual polygons. And the Quake series was also the first to move to OpenGL from software rendering.

Game graphics were never the same after id hit the scene. There was no going back to 2D after DOOM came out. And then after Quake, there was no going back to constrained 3D. Everything had to be real 3D, real lighting. And funny enough, the Quake series actually introduced ray tracing to games – it was done offline, at development time. Quake III had ray traced lighting, what was called “radiosity”, baked into lightmaps. And that introduced the ray tracing “look”. Quake III-era stuff has finally come home to the point where we can do it all in real time, instead of as an offline pre-process.

I think in each case, once consumers saw what the technology could do for the visuals, it was impossible to go back. Ray tracing is that. There’s no going back once you’ve seen it. You understand the quality and you just want to stay there.

Q: That’s true, but it does come at a cost. Why would somebody want to turn on ray tracing when that might mean they can’t hit their frame rate and resolution at ultra settings?

A:  From the gamer’s side, there are different categories of players. You might be in a different category at different times, depending on what type of game you’re playing.

For instance, if you’re playing a title like Shadow of the Tomb Raider, you’re playing for the experience, as opposed to any competitive context. You can run at 60 hertz and enable the best possible visuals. You’re essentially getting a cinematic visual experience, which is very different than if you’re playing Overwatch competitively, and you’re running on a 240 hertz monitor, and you’re tweaking all the settings for performance. In the eSports case, you don’t really care about the cinematic experience, you care about your performance as a player.

So I think at different times people are going to come down in different places on where they want the settings. And there will come a point – I don’t think the first generation of ray traced titles is there – where you don’t want to turn off ray tracing, because it’s giving you additional cues that are important to gameplay. You need to see the reflection of the explosion around the corner, or the enemy’s shadow that tells you not to go into that alley. You’re watching the overtaking car reflected off your Corvette’s bodywork, or hiding in pixel-perfect shadows.

Going forward, we’re pushing aggressively towards offering higher framerates, trying to still let you get to those kinds of ultra modes without sacrificing peak performance when you have a flagship GPU.

Today, there is a real trade-off between frame rate, resolution, and visuals. Competitive players need to be at one end of that; if you’re playing for fun and the experience on a mid-range PC rig, you have to balance all three.

Balance intelligently, though. We’ve been trained as players over the last decade to just say, “I only care about higher resolution and faster frame rate”. But honestly, 144 fps at 1080p is good enough for most games, and many players can’t actually tell the difference between 4K, 1440, or 1080 while they’re playing, for anything except the HUD text. You probably watch Netflix at 720p 30fps and it looks great. So it’s not just about the numbers. When you’re not playing solely competitively, you have to step back and ask, “am I seeing on-screen the game that I was paying for?” Once you’re at a good frame rate, it is probably better to crank up the visual quality and see the game the way the developer intended it. You don’t really want 240 Hz or 4K bragging rights with a low-quality image. You want the game that looks gorgeous and runs just fast and sharp enough that you don’t notice pixels or frame rate.

About Morgan McGuire

Morgan is a Distinguished Research Scientist at NVIDIA working to create new experiences through hardware and software innovation. He’s the coauthor of The Graphics Codex; Computer Graphics: Principles and Practice; and Creating Games, and he contributed to the Skylanders, Call of Duty, Marvel Ultimate Alliance, and Titan Quest series of video games. He holds faculty appointments at Williams College, the University of Waterloo, and McGill University and received a Ph.D. from Brown University.
