Running Re-ID network in Triton


• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: nvcr.io/nvidia/deepstream:7.1-gc-triton-devel
• Issue Type (questions / new requirements / bugs): questions

Hi all,

I am currently investigating nvtracker and related features. I am wondering if there is an option to run the re-identification (Re-ID) network inside Triton instead of as a single TensorRT engine instance. I am running multiple DeepStream applications that all use the same Re-ID network, and it would be much more efficient to serve it from Triton with CUDA buffer sharing.

Many Thanks!

nvtracker does not currently support Triton backend inference. Can you share your use case and the pipeline in your project? We can check whether we can optimize the pipeline for your use case.

Basically, I have multiple slightly different pipelines that all use the same Re-ID model provided by DeepStream. Say I have 3 different pipelines; the current approach only supports creating 3 separate Re-ID TensorRT engines, which is very inefficient. I may create my own custom low-level library to accommodate this for the moment.
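For context, if the Re-ID model were hosted on a standalone Triton server, a single model instance could serve all three pipelines. A minimal, illustrative `config.pbtxt` might look like the following; the model name, tensor names, and dimensions here are placeholders, not the actual DeepStream Re-ID model's values:

```
# Hypothetical Triton model configuration for one shared Re-ID instance.
# Names and dims are illustrative only.
name: "reid_network"
platform: "tensorrt_plan"
max_batch_size: 32
input [
  { name: "input", data_type: TYPE_FP32, dims: [ 3, 256, 128 ] }
]
output [
  { name: "embedding", data_type: TYPE_FP32, dims: [ 256 ] }
]
instance_group [
  { count: 1, kind: KIND_GPU, gpus: [ 0 ] }
]
```

With `count: 1` in `instance_group`, Triton keeps exactly one copy of the engine on GPU 0 and batches requests from all connected clients, which is the sharing behavior being asked about.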

Here is the guide for implementing a custom low-level tracker library: Gst-nvtracker — DeepStream documentation
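To illustrate the idea behind such a custom library, here is a small conceptual sketch. This is not the real NvMOT API from `nvdstracker.h`; the names below (`tracker_init`, `tracker_process`, `SharedReidBackend`) are hypothetical stand-ins that only show the structural point: many per-pipeline tracker contexts can reuse one process-wide Re-ID backend (e.g. a client to a shared Triton server) instead of each building its own TensorRT engine.

```python
class SharedReidBackend:
    """Stand-in for a client to a single shared Re-ID model server."""

    def extract(self, crop):
        # Placeholder embedding: the mean of the crop values, repeated.
        mean = sum(crop) / len(crop) if crop else 0.0
        return [mean] * 4


_backend = None  # process-wide singleton, created once


def tracker_init():
    """Mirrors the role of a per-stream init: reuse the shared backend."""
    global _backend
    if _backend is None:
        _backend = SharedReidBackend()
    return {"backend": _backend}


def tracker_process(ctx, crop):
    """Mirrors per-frame processing for one detection crop."""
    return ctx["backend"].extract(crop)


if __name__ == "__main__":
    a = tracker_init()
    b = tracker_init()  # a second "pipeline"
    print(a["backend"] is b["backend"])            # True: one backend, many contexts
    print(tracker_process(a, [1.0, 2.0, 3.0])[0])  # 2.0
```

A real implementation would replace `SharedReidBackend.extract` with a call to an external inference service and would implement the entry points that Gst-nvtracker actually dlopens, as described in the linked documentation.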

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.