TensorFlow 2.0 with Tighter TensorRT Integration Now Available

To help developers build scalable ML-powered applications, Google has released TensorFlow 2.0, one of the core open-source libraries for training deep learning models. With this release, developers can see up to 3x faster training performance using mixed precision on NVIDIA Volta and Turing GPUs.
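As a rough sketch of how mixed precision is enabled in TensorFlow 2.x (note: in the original 2.0 release this API lived under `tf.keras.mixed_precision.experimental`; the stable form below landed in later 2.x releases):

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Enable mixed precision globally: layers compute in float16 while
# variables stay in float32 for numerical stability. On Volta/Turing
# GPUs this lets Keras layers use Tensor Cores automatically.
mixed_precision.set_global_policy('mixed_float16')

policy = mixed_precision.global_policy()
print(policy.compute_dtype)   # float16
print(policy.variable_dtype)  # float32
```

With the global policy set, ordinary Keras models pick up mixed precision without further code changes; loss scaling is applied automatically by `Model.fit`.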

TensorFlow 2.0 features tighter integration with TensorRT, NVIDIA’s high-performance deep learning inference optimizer, commonly used to accelerate models such as ResNet-50 and BERT. With TensorRT and TensorFlow 2.0, developers can achieve up to a 7x speedup on inference.
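A minimal sketch of the TF-TRT conversion workflow in TensorFlow 2.0, assuming a hypothetical SavedModel directory (running it requires an NVIDIA GPU with the TensorRT runtime installed):

```python
# TF-TRT converts a TF 2.0 SavedModel so that supported subgraphs run
# as optimized TensorRT engines. The paths below are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)  # use FP16 Tensor Cores

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',  # hypothetical input
    conversion_params=params)
converter.convert()
converter.save('resnet50_trt_fp16')  # optimized model, loadable as usual
```

The converted model is still a SavedModel and can be loaded with `tf.saved_model.load`, so serving code does not need to change.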

This update also includes an improved API, making TensorFlow easier to use, along with higher performance during inference on NVIDIA T4 GPUs on Google Cloud.

“TensorFlow 2.0 makes development of ML applications much easier,” the TensorFlow team wrote in a blog post. “With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution, TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers.”

For training on multiple GPUs, developers can use the Distribution Strategy API to distribute training with minimal code changes, both with Keras’ Model.fit and with custom training loops, the TensorFlow team said.
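A small sketch of the Model.fit path with a distribution strategy (the model and data here are toy placeholders):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs;
# with no GPU present it falls back to the single available device.
strategy = tf.distribute.MirroredStrategy()
print('Replicas in sync:', strategy.num_replicas_in_sync)

# Build and compile inside the strategy scope; Model.fit then handles
# splitting batches across replicas and aggregating gradients.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')

# Toy data, purely for illustration.
x = np.random.rand(64, 8).astype('float32')
y = np.random.rand(64, 1).astype('float32')
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

The only distribution-specific code is creating the strategy and entering its scope; the rest is ordinary Keras.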

If you are looking to migrate from TensorFlow 1.x to 2.0, the TensorFlow team has published a how-to guide to help you through the process.
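Part of that migration path is an automated upgrade script that ships with the TensorFlow 2.0 pip package; a sketch of its usage against hypothetical file names:

```shell
# Rewrite 1.x API calls in a single script to their 2.0 equivalents
# (unconvertible calls are mapped to tf.compat.v1).
tf_upgrade_v2 --infile train_v1.py --outfile train_v2.py

# Upgrade a whole project tree and write a report of every change made.
tf_upgrade_v2 --intree project_v1/ --outtree project_v2/ --reportfile report.txt
```

The report file is worth reviewing, since the script flags call sites it could not convert automatically.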

Learn more on the TensorFlow Medium blog.