CUDA-X AI includes cuDNN for accelerating deep learning primitives, cuML from RAPIDS.ai for accelerating machine learning algorithms, NVIDIA TensorRT for optimizing trained models for inference, and over 15 other libraries. Together, they work seamlessly with NVIDIA Tensor Core GPUs to accelerate end-to-end workflows for developing and deploying AI-based applications.
CUDA-X AI is integrated into all major deep learning frameworks, including TensorFlow, PyTorch, and MXNet, and into leading cloud platforms, including AWS, Microsoft Azure, and Google Cloud.
CUDA-X AI libraries are freely available as individual downloads or as containerized software stacks for many applications from NGC. They can be deployed anywhere NVIDIA GPUs run, including desktops, workstations, servers, cloud instances, and internet of things (IoT) devices.
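As a sketch of the containerized workflow, an application can be layered on top of an NGC base image that already bundles the CUDA-X AI stack. The image tag and the `train.py` script below are illustrative assumptions, not specific recommendations; current image names and tags should be taken from the NGC catalog.

```dockerfile
# Illustrative base image from NGC; the tag is an assumption --
# check the NGC catalog for current releases.
FROM nvcr.io/nvidia/pytorch:24.01-py3

# The base image ships with the CUDA-X AI libraries (cuDNN, TensorRT, etc.)
# preinstalled, so only the application code needs to be added.
COPY train.py /workspace/train.py
CMD ["python", "/workspace/train.py"]
```

The same image can then be run unchanged on a workstation, a server, or a GPU cloud instance, which is what makes the "deploy anywhere" model practical.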