With the immense amounts of data required to train neural networks effectively, it’s critical to know how to leverage the power of multiple GPUs. The NVIDIA Deep Learning Institute (DLI) launched a new online course today in collaboration with Uber called Deep Learning at Scale with Horovod. The course teaches you how to scale deep learning training to multiple GPUs using Horovod, an open-source distributed training framework originally built by Uber and hosted by the LF AI Foundation.
Through hands-on, self-paced learning powered by GPU-accelerated workstations in the cloud, you will:
- Complete a step-by-step refactor of a Fashion-MNIST classification model to use Horovod and run on four NVIDIA V100 GPUs
- Understand Horovod’s MPI roots and build an intuition for parallel programming concepts such as multiple workers, race conditions, and synchronization
- Apply techniques such as learning rate warmup that significantly affect performance when scaling deep learning training
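To make the ideas in the list above concrete, here is a minimal pure-Python sketch (deliberately free of any Horovod dependency, so it is not Horovod's actual API): several workers each compute a gradient on their own data shard, an allreduce-style average synchronizes them, and the learning rate is scaled by the number of workers with a linear warmup. All names and numeric values are illustrative assumptions.

```python
# Illustrative sketch of multi-worker training concepts; not Horovod code.
NUM_WORKERS = 4        # e.g. four V100 GPUs, one worker per GPU
BASE_LR = 0.01         # single-GPU learning rate
WARMUP_STEPS = 5       # ramp up over the first few steps

def allreduce_average(per_worker_grads):
    """Average one scalar gradient across workers (what ring-allreduce computes)."""
    return sum(per_worker_grads) / len(per_worker_grads)

def warmup_lr(step):
    """Linearly warm up from BASE_LR to the scaled rate NUM_WORKERS * BASE_LR."""
    target = BASE_LR * NUM_WORKERS
    if step >= WARMUP_STEPS:
        return target
    return BASE_LR + (target - BASE_LR) * step / WARMUP_STEPS

# One simulated training step: each worker sees a different shard of data,
# so their gradients differ; averaging keeps the model replicas identical.
weight = 1.0
per_worker_grads = [0.2, 0.4, 0.1, 0.3]    # stand-ins for real gradients
grad = allreduce_average(per_worker_grads)  # 0.25
weight -= warmup_lr(0) * grad               # step 0 still uses the unscaled BASE_LR
```

Scaling the learning rate with worker count compensates for the larger effective batch size, and the warmup avoids instability from applying that larger rate before the model has settled.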
At the end of the two-hour course, you’ll be able to use Horovod to effectively scale deep learning training in new or existing code bases.
Getting started is easy: enroll in the course, then create or log in to your NVIDIA Developer Program account. All you need is a laptop and an internet connection.
Interested in learning more about scaling AI training? Check out our instructor-led workshop on Fundamentals of Deep Learning for Multi-GPUs, which teaches you how to parallelize training of deep neural networks using TensorFlow. Our instructor-led workshops are designed for teams of developers and led by DLI-certified instructors. Request a workshop at nvidia.com/requestdli.