Using MATLAB and TensorRT on NVIDIA GPUs

Originally published at: https://developer.nvidia.com/blog/using-matlab-and-tensorrt-on-nvidia-gpus/

As we design deep learning networks, how can we quickly prototype the complete algorithm, including the pre- and postprocessing logic around deep neural networks (DNNs), to get a sense of timing and performance on standalone GPUs? This question comes up frequently from the scientists and engineers I work with. Traditionally, they would hand translate the complete algorithm…
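
To make the workflow concrete, here is a minimal sketch of the kind of prototyping the post is about: defining a MATLAB entry-point function around a pretrained DNN and compiling it for an NVIDIA GPU with GPU Coder targeting TensorRT. This assumes MATLAB with GPU Coder and the TensorRT support package installed; the function name `mynet_predict` and the choice of ResNet-50 are illustrative assumptions, not taken from the original post.

```matlab
% mynet_predict.m -- hypothetical entry-point function for code generation.
% Loads a pretrained network once and runs inference on the input image.
function out = mynet_predict(in) %#codegen
    persistent net;
    if isempty(net)
        % coder.loadDeepLearningNetwork makes the network code-generation ready;
        % ResNet-50 stands in here for whatever DNN the algorithm uses.
        net = coder.loadDeepLearningNetwork('resnet50');
    end
    out = net.predict(in);
end
```

A sketch of the corresponding build step, run from the MATLAB command line:

```matlab
cfg = coder.gpuConfig('mex');                                   % generate a CUDA MEX target
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');  % use TensorRT for DNN inference
codegen -config cfg mynet_predict -args {ones(224,224,3,'single')}
```

The generated MEX function can then be called like any MATLAB function, which lets you time the complete algorithm, DNN plus surrounding logic, on the GPU without hand translating anything.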