New Tutorial: Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server

This tutorial walks through deploying an open source model on DeepStream with minimal configuration using Triton Inference Server. It also covers several optimizations you can apply to improve application performance. Read the full post, Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server, on the NVIDIA Technical Blog.
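As a rough illustration of the "minimal configuration" the tutorial refers to, Triton typically loads a TensorFlow SavedModel from a model repository with a small `config.pbtxt` alongside it. The sketch below is a hypothetical example, not taken from the tutorial: the model name, tensor names, and dimensions are placeholder assumptions and would need to match your actual exported model.

```
# config.pbtxt — hypothetical sketch for a TensorFlow Model Zoo detector.
# Tensor names and dims are placeholders; inspect your SavedModel's
# signature (e.g. with saved_model_cli) to get the real values.
name: "ssd_mobilenet"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_tensor"
    data_type: TYPE_UINT8
    dims: [ 300, 300, 3 ]
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  }
]
```

The tutorial itself describes the exact repository layout and configuration values to use; the fragment above only conveys the general shape of a Triton model configuration.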