New DLI Hands-On Course Shows How to Optimize and Deploy TensorFlow Models with NVIDIA TensorRT

The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science. Developers, data scientists, researchers, and students can get practical experience powered by GPUs in the cloud and earn a certificate of competency to support professional growth. Now, as developers continue to work remotely, NVIDIA is offering new online courses they can take from the comfort of their homes. 

In this new course, Optimization and Deployment of TensorFlow Models with TensorRT, developers can learn how to optimize TensorFlow models to generate fast inference engines in the deployment stage.

In the course, developers will learn how to optimize TensorFlow models for more performant inference with the built-in TensorRT integration, called TF-TRT. By the end of the course they will be able to:

  • Optimize TensorFlow models using TF-TRT
  • Increase inference throughput without meaningful loss in accuracy by using TF-TRT to reduce model precision from FP32 to FP16 or INT8
  • Observe how tuning TF-TRT parameters affects performance
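As a minimal sketch of what the TF-TRT workflow above looks like in code (the function name, paths, and default precision here are illustrative, not taken from the course materials):

```python
# Hypothetical helper sketching TF-TRT conversion with the TensorFlow 2.x API.
# The TensorRT import is deferred so the sketch can be read (and the function
# defined) on machines without TensorRT installed.
def convert_to_trt(saved_model_dir, output_dir, precision="FP16"):
    """Optimize a SavedModel with TF-TRT at the given precision mode
    ("FP32", "FP16", or "INT8")."""
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    params = trt.TrtConversionParams(precision_mode=precision)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params,
    )
    converter.convert()          # build the TensorRT-optimized graph
    converter.save(output_dir)   # write the optimized SavedModel to disk
```

Lowering the precision mode from FP32 to FP16 or INT8 is what trades a small amount of numerical accuracy for higher inference throughput; INT8 additionally requires a calibration step, which the course covers.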

The course requires competency in training deep learning models in Python, and enrollment carries a nominal fee.

Enroll now.


Visit the NVIDIA DLI homepage for a full list of hands-on training available remotely.
