This tutorial outlines the steps to deploy an open-source model on DeepStream with minimal configuration using Triton Inference Server. It also shows several optimizations you can leverage to improve application performance. Read the blog: Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server, on the NVIDIA Technical Blog.
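Serving a TensorFlow model zoo model through Triton requires a model configuration file alongside the exported SavedModel. As a rough sketch of what "minimal configuration" looks like, a hypothetical `config.pbtxt` for an SSD-style detector might be structured as follows (the model name, tensor names, shapes, and data types are illustrative assumptions, not taken from the blog):

```
# config.pbtxt — hypothetical minimal Triton model configuration for a
# TensorFlow SavedModel object detector. All names/shapes below are
# illustrative; check your model's signature for the real values.
name: "ssd_mobilenet_v1_coco"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_tensor"       # input tensor name from the SavedModel signature
    data_type: TYPE_UINT8
    dims: [ 300, 300, 3 ]      # HWC image input, per-model
  }
]
output [
  {
    name: "detection_boxes"    # one of several detector outputs
    data_type: TYPE_FP32
    dims: [ 100, 4 ]           # up to 100 boxes, [ymin, xmin, ymax, xmax]
  }
]
```

This file sits in the Triton model repository (e.g., `model_repo/<model_name>/config.pbtxt` next to the versioned SavedModel directory), and DeepStream's `nvinferserver` plugin is then pointed at that repository.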
Related topics
| Topic | Replies | Views | Activity |
|---|---|---|---|
| Like to know more bout how to deploy custom model (tensorflow model zoo) on DeepStream? | 2 | 737 | August 28, 2021 |
| Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server | 3 | 8913 | February 29, 2024 |
| Support for Triton Inference Server on Jetson NX | 2 | 885 | November 2, 2022 |
| Simplifying AI Model Deployment at the Edge with NVIDIA Triton Inference Server | 0 | 473 | September 14, 2021 |
| Problems running the Tensorflow Model Zoo example using Triton | 3 | 624 | July 20, 2022 |
| Industrial defect detection | 2 | 409 | October 12, 2021 |
| How to deploy mxnet/ Tensorflow models with deepstream5? | 0 | 502 | September 6, 2020 |
| Q&A for the webinar “Create Intelligent Places Using NVIDIA Pre-trained vision Models and DeepStream SDK” | 2 | 646 | September 5, 2020 |
| Optimizing and Serving Models with NVIDIA TensorRT and NVIDIA Triton | 1 | 387 | July 20, 2022 |
| Is there a suitable version of triton server for jetson xavier (arm)? | 2 | 689 | November 3, 2021 |