Combining DeepStream pipeline branches

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.2 (docker image)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12

Hello, I’m constructing a DeepStream pipeline with the following branches:

branch 1: PeopleNet + NvDCF
branch 2: PeopleNet + NvDCF + blood classifier
branch 3: PeopleNet + NvDCF + Gender classifier
branch 4: PeopleNet + NvDCF + ReID
branch 5: PeopleNet + NvDCF + ink/no ink classifier

Since “PeopleNet” and “NvDCF” are common to all five branches, what would be the most optimal pipeline design?

One possible solution I’m considering is to use tee → (nvstreamdemux → nvstreammux) for each branch, right after the PeopleNet and NvDCF plugins.
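To make the proposed topology concrete, here is a small sketch that composes a gst-launch-style description of it: a shared PeopleNet (nvinfer) + NvDCF (nvtracker) front end, then a tee feeding one nvstreamdemux/nvstreammux pair per branch, with a secondary classifier on branches 2–5. This only builds the description string, it is not a runnable DeepStream app; all config-file names, batch sizes, and sink elements are placeholders for illustration.

```python
# Sketch only: compose a gst-launch-style pipeline description for the
# tee -> (nvstreamdemux -> nvstreammux) per-branch design.
# Config-file paths and batch sizes below are placeholders.

SHARED = (
    "nvstreammux name=mux batch-size=5 ! "
    "nvinfer config-file-path=peoplenet_pgie.txt ! "  # common PeopleNet detector
    "nvtracker ll-config-file=nvdcf.yml ! "           # common NvDCF tracker
    "tee name=t "
)

# Branch 1 is detector+tracker only; branches 2-5 each add a classifier (SGIE).
CLASSIFIERS = [None, "blood", "gender", "reid", "ink"]

def branch(i, sgie):
    """One tee branch: demux the shared batch, re-mux, then optional SGIE."""
    desc = (
        f"t. ! queue ! nvstreamdemux name=d{i} "
        f"d{i}.src_0 ! m{i}.sink_0 nvstreammux name=m{i} batch-size=1 ! "
    )
    if sgie:
        desc += f"nvinfer config-file-path={sgie}_sgie.txt ! "
    return desc + f"fakesink name=sink{i} "

pipeline = SHARED + "".join(branch(i, c) for i, c in enumerate(CLASSIFIERS, 1))
print(pipeline)
```

In a real application the queue elements after the tee matter: without them the tee branches share one streaming thread and a slow classifier branch would stall the others.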

Is it also possible to use Triton server?

Thanks.

  1. Does every branch have the same inputs? If yes, DeepStream supports back-to-back classification models; please refer to deepstream-test2.
  2. The DeepStream nvinferserver plugin uses Triton server to do inference; deepstream-test2 also supports nvinferserver.

Each branch will receive different input streams. The PeopleNet model and the NvDCF tracker are common to all the branches.

For this scenario what would be the most optimal pipeline design?

In theory, you can design the pipeline like this:


Please refer to the similar application deepstream_parallel_inference_app; the difference is that each branch in that application has its own dedicated inference. You can compare with the design graph in its README.

Thanks for validating the approach. Just thinking out loud - is there a way I could use Triton ensemble for this?

The DeepStream nvinferserver plugin uses the Triton library to do inference; deepstream-test2 and deepstream_parallel_inference_app both support switching between nvinfer and nvinferserver.
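For reference, switching a branch to Triton essentially means replacing nvinfer with nvinferserver and pointing it at a protobuf-text config like the minimal sketch below. The model name, repository path, and preprocess values here are placeholders; check the fields against the Gst-nvinferserver documentation and the sample configs shipped with your DeepStream version.

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "peoplenet"          # placeholder model name
      version: -1                      # latest version in the repo
      model_repo {
        root: "/opt/nvidia/deepstream/samples/triton_model_repo"
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize { scale_factor: 0.00392156862745098 }
  }
}
```

Whether a Triton ensemble helps depends on where the branching happens: an ensemble chains models inside Triton itself, while the design above branches at the GStreamer level, which keeps per-branch metadata (tracker IDs, stream demuxing) in DeepStream.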

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.