RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Once you have built, trained, tweaked, and tuned your deep learning model, you need an inference solution you can deploy to a datacenter or to the cloud, and you need it to deliver the maximum possible performance. You may have heard that NVIDIA TensorRT can maximize inference performance on NVIDIA GPUs, but how do…
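To make the idea of RESTful inference concrete, here is a minimal client-side sketch: a single HTTP POST carrying an image to a TensorRT-backed inference service and reading back the result. The endpoint path, port, model name, and payload format here are assumptions for illustration; the actual API depends on the inference server container you deploy.

```python
import requests

# Hypothetical REST endpoint exposed by a TensorRT-based inference
# container; the real path, port, and model name depend on your deployment.
URL = "http://localhost:8000/api/infer/resnet50"

# Send raw image bytes and read the classification result back as JSON.
with open("cat.jpg", "rb") as f:
    resp = requests.post(
        URL,
        data=f.read(),
        headers={"Content-Type": "application/octet-stream"},
    )
resp.raise_for_status()
print(resp.json())
```

The appeal of this pattern is that the client needs nothing GPU-specific: any language with an HTTP library can submit inference requests, while TensorRT optimizations stay encapsulated inside the container.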