This tutorial outlines the steps to deploy an open-source model on DeepStream with minimal configuration using Triton Inference Server. It also shows several optimizations you can leverage to improve application performance. Read the full post on the NVIDIA Technical Blog: Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server.
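For context on what "minimal configuration" means here: serving a model through Triton centers on a model repository entry, a directory holding the TensorFlow SavedModel plus a `config.pbtxt` describing its inputs and outputs. The sketch below is illustrative only, assuming an SSD-style detector from the TensorFlow Model Zoo; the model name, tensor names, and dimensions are placeholders, not the blog's exact values.

```
# config.pbtxt -- minimal sketch for serving a TF Model Zoo detector via Triton.
# The model name, tensor names, and shapes are assumptions for illustration;
# inspect your SavedModel's signature (e.g. with `saved_model_cli show`) for
# the real values.
name: "ssd_mobilenet_v2"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_tensor"
    data_type: TYPE_UINT8
    dims: [ 300, 300, 3 ]
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  }
]
```

In a DeepStream pipeline, the Gst-nvinferserver plugin then points at this model repository through its own configuration file, so the deployment and the performance tuning the blog discusses are largely expressed as config changes rather than application code.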