
DLSS: What Does It Mean for Game Developers?

We have all heard a lot about the importance of two emerging technologies for games: real-time ray tracing and AI. The former is easy to grasp immediately: anyone who watches a short demo like Project Sol can see the benefits of the technology straight away. DLSS (Deep Learning Super Sampling), while just as important, is more subtle in displaying its benefits.
We talked to Andrew Edelsten, Director of Developer Technologies at NVIDIA, to get insight into DLSS, and understand why it’s so important to the game development community. 

Question: All right, Andrew, let’s start with defining “Artificial Intelligence” for game development. When most people hear that term, the first thing that comes to mind is probably a virtual brain that drives non-player characters and competition in a game. But for video games in 2018, what is “AI”, really? How are these techniques “intelligent”?
Answer: The term “AI” is definitely overloaded in game development, but in this case we are referring to “AI” in the traditional computer science sense: using a mathematical model to map inputs to outputs, trained with examples rather than hand-crafted computer code.

Question: What is DLSS? How does DLSS provide developers with the power of AI?
Answer: NVIDIA Turing and the NVIDIA RTX platform have introduced exciting new ways to combine rasterization with both ray tracing and deep learning (aka AI). DLSS is the first in a new line of techniques that leverage the deep learning and AI aspects of RTX to provide new rendering technology to game developers and gamers.
DLSS makes extensive use of the Tensor Cores in the Turing GPU to implement a deep neural network (DNN) that offers high-resolution gaming at high frame rates and improves image quality to super-sampled levels.


For more information on DLSS and other deep learning technologies, see:
DLSS Product Page: https://developer.nvidia.com/rtx/ngx
Deep Learning GTC sessions: http://on-demand-gtc.gputechconf.com/gtc-quicklink/8dTgl


Question: How is the DLSS model trained, and why is it a big jump forward in super-resolution and anti-aliasing technology?
Answer: The DLSS model is trained on a mix of final rendered frames and intermediate buffers taken from a game’s rendering pipeline. We do a large amount of preprocessing to prepare the data to be fed into the training process on NVIDIA’s Saturn V DGX-based supercomputing cluster. One of the key elements of the preprocessing is to accumulate frames so as to generate “perfect frames”.
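
To make the accumulation idea concrete, here is a minimal sketch, assuming a hypothetical render_1spp() callback that stands in for a game's renderer; it is illustrative only, not NVIDIA's actual preprocessing pipeline:

```python
# Sketch of the "perfect frame" accumulation idea: average many
# 1-sample-per-pixel renders of the same scene state, each with a different
# sub-pixel jitter, to approximate a 64-sample-per-pixel reference frame.
# render_1spp() is an assumed stand-in for a game's renderer, not a real API.
import numpy as np

def accumulate_perfect_frame(render_1spp, width, height, samples=64):
    accum = np.zeros((height, width, 3), dtype=np.float64)
    for _ in range(samples):
        # Each pass uses a different sub-pixel offset so the samples
        # collectively cover the area of every pixel.
        jitter = np.random.uniform(-0.5, 0.5, size=2)
        accum += render_1spp(jitter)  # assumed to return an HxWx3 float array
    # The averaged result serves as the "ground truth" supersampled frame.
    return (accum / samples).astype(np.float32)
```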
During training, the DLSS model is fed thousands of aliased input frames, and its output is judged against the “perfect” accumulated frames. This has the effect of teaching the model how to infer a 64-sample-per-pixel supersampled image from a 1-sample-per-pixel input frame. This is quite a feat! We are able to use the newly inferred information about the image to apply extremely high-quality “ultra AA” and increase the frame size to achieve a higher display resolution.
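
The training setup described above might look something like the following PyTorch-style sketch; the model, data loader, and mean-squared-error loss here are all assumptions for illustration, not NVIDIA's actual training code:

```python
# Minimal training-loop sketch: the network sees aliased 1-spp frames and is
# penalized against the accumulated 64-spp "perfect" frames. Loss choice and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def train(model, loader, epochs=10, lr=1e-4, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for aliased, perfect in loader:  # (1-spp input, 64-spp target) pairs
            aliased, perfect = aliased.to(device), perfect.to(device)
            pred = model(aliased)             # inferred supersampled frame
            loss = F.mse_loss(pred, perfect)  # judged against the reference
            opt.zero_grad()
            loss.backward()
            opt.step()
```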
To explain a little further, DLSS uses a convolutional auto-encoder to process each frame, which is a very different approach from other anti-aliasing techniques. The “encoder” of the DLSS model extracts multidimensional features from each frame to determine what is an edge or a shape and what should and should not be adjusted. The “decoder” then brings the multidimensional features back to the simpler red, green, and blue values that we humans can understand. While the decoder is doing this, it also smooths out edges and sharpens areas of detail that the encoder flagged as important.
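
As a rough illustration of that encoder/decoder split, here is a toy convolutional auto-encoder in PyTorch; the layer counts and channel widths are invented for the example, since the real DLSS network architecture is not public:

```python
# Toy convolutional auto-encoder illustrating the encoder/decoder structure.
import torch.nn as nn

class ToyAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: extracts multidimensional feature maps from the RGB frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: maps the feature maps back to plain RGB values while
        # reconstructing smooth edges and sharpened detail.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```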

Question: Can you walk us through how DLSS will be worked into the developer’s workflow?
Answer: DLSS is a post-processing effect that can be integrated into any modern engine. It doesn’t require any art or content changes and should function well in titles that support Temporal Anti-Aliasing (TAA).
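
To visualize the integration point, here is a hypothetical frame sketch in Python-style pseudocode; every function name is an invented placeholder rather than a real engine or NGX API, and it simply shows DLSS running as a post-process in roughly the spot a TAA resolve would occupy:

```python
# Hypothetical per-frame flow. All functions below are invented placeholders.
def render_frame(scene, render_res, display_res, use_dlss):
    gbuffer = rasterize(scene, render_res)   # normal engine work, unchanged
    color = shade_and_light(gbuffer)
    if use_dlss:
        # DLSS takes the lower-resolution render and produces the final
        # display-resolution frame, standing in for the TAA resolve.
        color = dlss_upscale(color, display_res)
    else:
        color = taa_resolve(color)
    return tonemap_and_ui(color, display_res)  # remaining post goes on top
```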
Question: How much work will a developer have to do to continue to train and improve the performance of DLSS in a game?
Answer: DLSS is the first time a deep learning model has been placed directly into a 3D rendering pipeline, made possible through heavy use of the Turing Tensor Cores. But developers need not worry about the difficulty of getting the DLSS model to run in only a few milliseconds. Teams across NVIDIA Research, our Hardware and Architecture groups, and Developer Technologies have worked on both image quality and performance, and we will continue to improve both over time.
At this time, in order to use DLSS to its full potential, developers need to provide data to NVIDIA to continue to train the DLSS model. The process is fairly straightforward with NVIDIA handling the heavy lifting via its Saturn V supercomputing cluster.
Question: Will consumers be able to see the difference DLSS makes?
Answer: Absolutely! The difference in both frame rate and image quality (depending on the mode selected) is quite pronounced. For instance, in many of the games we’re working on, DLSS lets them jump to being comfortably playable at 4K without stutter or dropped frames.
Question: How will developers be able to measure the performance difference DLSS enables?
Answer: DLSS provides a new way of achieving higher display resolutions without the cost of rendering every pixel in the traditional sense. DLSS is also flexible enough to allow developers to choose the level of performance and resolution scaling they wish rather than being locked to certain multiples of the physical monitor or display size. As DLSS is a post-processing effect, developers can measure performance using the standard timing tools that they already use.
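
As a trivial example of that kind of measurement, the sketch below times an assumed render_frame() (the placeholder from the earlier pipeline sketch) with DLSS on and off; a real engine would use its GPU timestamp queries, but the principle is the same:

```python
# Wall-clock timing sketch for comparing frame cost with and without DLSS.
# render_frame() is the hypothetical placeholder from the pipeline sketch.
import time

def average_frame_time(scene, render_res, display_res, use_dlss, frames=100):
    start = time.perf_counter()
    for _ in range(frames):
        render_frame(scene, render_res, display_res, use_dlss)
    return (time.perf_counter() - start) / frames  # avg seconds per frame
```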
Question: Another term that’s been used frequently is “deep learning”. What is the difference between “AI” and “deep learning”?
Answer: Generally speaking, the two terms are used interchangeably. Technically, deep learning (which uses multiple “deep” layers of neural networks, hence the name) is a subset of machine learning, which in turn is a subset of artificial intelligence.
Question: Future speak: How will AI in game development evolve? How will it change the development process over time?
Answer: Personally, and I think I speak for NVIDIA here, AI and deep learning are going to revolutionize all aspects of games and game development.
Thinking only a couple of years into the future, I can see games including all manner of AI-based enhancements:

    • Text-to-speech systems that provide engaging audio for all characters in a game.
    • NPC “chat bots” that access a vast game knowledge base covering the entire game universe.
    • Boss fights that aren’t pre-scripted and adapt to a player’s style.
    • Cheat detection systems that detect game hacks, aimbots, and wallhacks and keep games fair, competitive, and fun.
    • DNN models that handle character animation, physics, and different aspects of the rendering pipeline.

On the development side, AI will free designers and artists from mundane and repetitive tasks (and we all know there are many, MANY, horribly mundane tasks in game development). In so doing, development teams will be able to iterate on all aspects of games that much more, to hone the key elements of their game, and find the elusive “fun factor” that makes good games great!
Question: Why is Turing technology necessary to unlock the potential of AI?
Answer: The NGX platform provides a suite of AI-based “features” that make heavy use of Turing’s Tensor Cores. Until the advent of Turing, executing deep learning and AI models of this complexity on consumer-level hardware was thought impossible. NVIDIA plans to leverage this amazing new platform to bring many more exciting features to gamers and users everywhere.
You can learn more about DLSS and sign up for notifications on the availability of the NGX SDK here.
