
NVIDIA at TensorFlow World 2019

From TensorFlow 2.0 and TensorRT, to automatic mixed precision for faster training, to running the latest ASR models in production, learn how NVIDIA GPUs and TensorFlow are helping developers dramatically accelerate their AI-based applications.

From October 28-31, join NVIDIA at TensorFlow World 2019 in Santa Clara, California, for insights and hands-on training on the latest GPU optimizations in TensorFlow.

Here are just a few of the sessions you can attend at TensorFlow World 2019 that highlight GPU-based solutions:

Keynote

Accelerating TensorFlow for Research and Deployment

10/31 9:25am – 9:30am

NVIDIA VP of Deep Learning Software Ujval Kapasi will present how machine learning on NVIDIA GPUs allows developers to solve problems that seemed impossible just a few years ago. He will also explain how software and hardware advances on GPUs are impacting development efforts across the community, both today and in the future.

Add to your calendar

Hands-On Tutorial

Accelerating training, inference, and ML applications on NVIDIA GPUs 

10/29 1:30pm – 5:00pm

In this hands-on tutorial, we’ll dive into techniques to accelerate training and inference for common deep learning and machine learning workloads. You’ll learn how DALI can eliminate input/output (I/O) and data processing bottlenecks in real-world applications and how automatic mixed precision (AMP) can easily give you up to 3x better training performance on Volta GPUs. You’ll see best practices for multi-GPU and multi-node scaling using Horovod. We’ll use a deep learning profiler to visualize TensorFlow operations and identify optimization opportunities. Finally, you’ll learn to deploy these trained models using INT8 quantization in TensorRT (TRT), all within convenient new APIs of the TensorFlow framework.

What you’ll learn

  • Discover components from NVIDIA’s software stack to speed up pipelines and eliminate I/O bottlenecks
  • Learn how to enable mixed precision when training models and use TensorRT to optimize your trained models for inference
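To get a feel for the mixed-precision portion before the session, here's a minimal sketch of enabling automatic mixed precision in TensorFlow. It is not taken from the tutorial materials: the model and optimizer are placeholders, and the graph-rewrite call shown is one of the documented ways to turn AMP on (the exact API location varies across TensorFlow versions, and NVIDIA's NGC containers also expose an environment-variable switch).

```python
import tensorflow as tf

# Placeholder model purely for illustration; the tutorial covers real workloads.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

opt = tf.keras.optimizers.SGD(learning_rate=0.01)

# Wrap the optimizer so TensorFlow rewrites the graph to run eligible ops in
# float16 and applies dynamic loss scaling. Tensor Core GPUs (Volta and newer)
# see the largest speedups.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., batch_size=...)
```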

Add to your calendar

Sessions

Faster inference in TensorFlow 2.0 with TensorRT

10/30 11:50am – 12:30pm

TensorFlow 2.0 is tightly integrated with TensorRT and offers high performance for deep learning inference through a simple API. We’ll use examples to show how to optimize an app using TensorRT with the new Keras APIs in TensorFlow 2.0. We’ll also share tips and tricks to get the highest performance possible on GPUs, along with examples of how to debug and profile apps using tools from NVIDIA and TensorFlow. You’ll walk away with an overview and resources to get started, and if you’re already familiar with TensorFlow, you’ll also get tips on how to get the most out of your application.

What you’ll learn

  • Discover the latest and greatest in the integrated solution, workflows and tools for profiling, and tips and tricks to squeeze the most out of your inference solution
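As a rough preview of the workflow this session covers, here's a minimal sketch of converting a TensorFlow 2.0 SavedModel with TF-TRT. The directory names are placeholders, and FP16 is used for brevity; INT8 requires an additional calibration step.

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Paths are placeholders; point them at your own SavedModel.
INPUT_SAVED_MODEL_DIR = "my_saved_model"
OUTPUT_SAVED_MODEL_DIR = "my_saved_model_trt"

# Ask TF-TRT to build FP16 engines for the supported parts of the graph.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode="FP16")

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=INPUT_SAVED_MODEL_DIR,
    conversion_params=params,
)
converter.convert()                    # replaces supported subgraphs with TensorRT ops
converter.save(OUTPUT_SAVED_MODEL_DIR)

# The converted model loads and runs like any other TF 2.0 SavedModel:
# model = tf.saved_model.load(OUTPUT_SAVED_MODEL_DIR)
# outputs = model.signatures["serving_default"](input_tensor)
```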

Add to your calendar

Running TensorFlow at scale on GPUs

10/30 11:50am – 12:30pm

We’ll talk about our experience running TensorFlow at scale on GPU clusters such as the DGX SuperPOD and the Summit supercomputer. We’ll also discuss the design of these large-scale GPU systems and how to run TensorFlow at scale, using BERT and AI+HPC applications as examples.
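The hands-on tutorial above mentions Horovod for multi-GPU and multi-node scaling; as background for this session, here's a minimal sketch of the standard Horovod-with-Keras data-parallel recipe. The model and dataset are placeholders, and this is the generic pattern rather than the exact setup used on the DGX SuperPOD or Summit.

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod and pin each worker process to one GPU.
hvd.init()
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Placeholder model; in practice this would be BERT or another large network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with the number of workers and wrap the optimizer so
# gradients are averaged across GPUs with allreduce.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

callbacks = [
    # Ensure every worker starts from identical weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
# model.fit(dataset, callbacks=callbacks, epochs=...)
# Launch with: horovodrun -np <num_gpus> python train.py
```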

Add to your calendar

Speech recognition with OpenSeq2Seq

10/31 2:30pm – 3:10pm

Automatic speech recognition (ASR) is a core technology for creating convenient human-computer interfaces. But building ASR systems with a competitive word error rate (WER) has traditionally required specialized expertise, large labeled datasets, and complex approaches.

In this session, we’ll dive into how end-to-end models have simplified speech recognition and present Jasper, an end-to-end convolutional neural acoustic model that yields state-of-the-art WER on LibriSpeech, an open dataset for speech recognition. We’ll also explore its implementation in the TensorFlow-based OpenSeq2Seq toolkit and how to use it to solve large-vocabulary speech recognition and speech command recognition problems.
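The session covers Jasper itself, but as a deliberately tiny illustration of what an end-to-end convolutional acoustic model looks like in code, here's a sketch of 1D convolutions mapping audio features to per-frame character logits, trained with CTC loss (the objective this family of models typically uses). Every dimension and layer count here is a placeholder; the real architecture and training configurations live in OpenSeq2Seq.

```python
import tensorflow as tf

NUM_MEL_BINS = 64   # log-mel features per audio frame (illustrative)
NUM_CLASSES = 29    # e.g., 28 characters plus a CTC blank (illustrative)

# Toy stand-in for a convolutional acoustic model: 1D convolutions over time
# map audio features directly to per-frame character logits, with no separate
# alignment, pronunciation, or language model in the training loop.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(256, kernel_size=11, padding="same",
                           activation="relu", input_shape=(None, NUM_MEL_BINS)),
    tf.keras.layers.Conv1D(256, kernel_size=11, padding="same",
                           activation="relu"),
    tf.keras.layers.Conv1D(NUM_CLASSES, kernel_size=1),  # per-frame logits
])

def ctc_loss(labels, label_lengths, logits, logit_lengths):
    """CTC lets the network learn the audio-to-text alignment end to end."""
    return tf.reduce_mean(
        tf.nn.ctc_loss(
            labels=labels,                # [batch, max_label_len] int32 character ids
            logits=logits,                # [batch, time, NUM_CLASSES]
            label_length=label_lengths,
            logit_length=logit_lengths,
            logits_time_major=False,
            blank_index=NUM_CLASSES - 1,
        )
    )
```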

Add to your calendar

