Using MATLAB and TensorRT on NVIDIA GPUs

MathWorks recently released MATLAB R2018b, which integrates with NVIDIA TensorRT through GPU Coder. With this integration, scientists and engineers can achieve faster deep learning inference on GPUs from within MATLAB.

MATLAB's high-level language and interactive environment let developers create numerical computations and algorithms quickly, supported by built-in visualization and programming tools.

A new technical blog by Bill Chou, product manager for code generation products including MATLAB Coder and GPU Coder at MathWorks, describes how you can use MATLAB's new capabilities to compile MATLAB applications into CUDA code and run them on NVIDIA GPUs with TensorRT. The example workflow includes compiling deep learning networks and any pre- or post-processing logic into CUDA, testing the algorithm in MATLAB, and integrating the generated CUDA code with external applications to run on any modern NVIDIA GPU.
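The workflow above can be sketched in a few lines of MATLAB. This is only an illustrative outline: `tsdr_predict` is a hypothetical entry-point function name, and the input size, image file, and INT8 setting are assumptions for the example rather than details taken from the post.

```matlab
% Sketch: generate TensorRT-accelerated CUDA code from a MATLAB entry point.
% Assumes tsdr_predict.m (hypothetical name) loads the trained network and
% runs the traffic sign detection and recognition logic.

cfg = coder.gpuConfig('mex');                                   % build a CUDA MEX for testing inside MATLAB
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');  % target the TensorRT library
cfg.DeepLearningConfig.DataType = 'int8';                       % optional: INT8 precision, as in the benchmark

% Generate CUDA code for an assumed 480x640 RGB uint8 input
codegen -config cfg tsdr_predict -args {ones(480,640,3,'uint8')}

% Call the generated MEX from MATLAB to verify results against the original
img = imread('stop_sign.png');   % hypothetical test image
out = tsdr_predict_mex(img);
```

Using the `'mex'` target keeps the test loop inside MATLAB; switching the config to a library or executable target produces CUDA code for integration with external applications.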

When using TensorRT with GPU Coder on a simple traffic sign detection and recognition (TSDR) example written in MATLAB, the team measured roughly 3x higher inference performance with TensorRT running on a Titan V GPU compared with a CPU-only platform.

                         CPU       GPU with cuDNN   GPU with TensorRT (INT8)
Execution time (s)       0.0320    0.0131           0.0107
Equivalent images/sec    31        76               93
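The "equivalent images/sec" row is simply the reciprocal of the per-image execution time, which can be checked directly (assuming the timings in the table are per single image):

```matlab
% Throughput = 1 / per-image execution time
times = [0.0320 0.0131 0.0107];   % CPU, cuDNN, TensorRT INT8
ips   = round(1 ./ times)         % approximately [31 76 93]
```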

Visit the original post on the NVIDIA Developer blog and download a free 30-day trial of MATLAB today.
