nohara
June 16, 2023, 5:06am
1
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.2 (docker image)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12
Hello, I’m constructing a DeepStream pipeline with the following branches:
branch 1: PeopleNet + NvDCF
branch 2: PeopleNet + NvDCF + blood classifier
branch 3: PeopleNet + NvDCF + Gender classifier
branch 4: PeopleNet + NvDCF + ReID
branch 5: PeopleNet + NvDCF + ink/no ink classifier
Since PeopleNet and NvDCF are common to all five branches, what would be the most optimal pipeline design?
One possible solution I'm considering: insert tee → (nvstreamdemux → nvstreammux), one per branch, right after the PeopleNet and NvDCF plugins.
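To picture that shared-detector design, here is a hedged gst-launch sketch, not a validated pipeline: the config file names, resolutions, tracker library path, and sinks are placeholders (not from this thread), only two of the five branches are shown, and the per-branch nvstreamdemux/nvstreammux stage is elided since its request-pad wiring is easier to do in application code:

```shell
# PGIE (PeopleNet) + NvDCF run once; tee fans the batched, tracked
# buffer out to per-branch secondary classifiers (SGIEs).
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=peoplenet_pgie_config.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
  tee name=t \
  t. ! queue ! nvinfer config-file-path=gender_sgie_config.txt ! fakesink \
  t. ! queue ! nvinfer config-file-path=reid_sgie_config.txt ! fakesink
```

Each branch that should only process a subset of the input streams would additionally place nvstreamdemux → nvstreammux after its queue to select those streams before its SGIE.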
Is it also possible to use Triton server?
Thanks.
nohara
June 20, 2023, 4:24am
4
Each branch will support different input streams. The PeopleNet detector and NvDCF tracker are common across all the branches.
For this scenario, what would be the most optimal pipeline design?
fanzh
June 20, 2023, 6:58am
5
In theory, you can design the pipeline like this.
Please refer to the similar application
deepstream_parallel_inference_app; the difference is that each branch in that application has its own separate inference. You can compare with the design graph in its README.
nohara
June 20, 2023, 7:26am
6
Thanks for validating the approach. Just thinking out loud - is there a way I could use a Triton ensemble for this?
fanzh
June 20, 2023, 7:39am
7
The DeepStream nvinferserver plugin uses the Triton library to do inference; deepstream-test2 and deepstream_parallel_inference_app support switching between nvinfer and nvinferserver.
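Concretely, switching a branch to Triton means replacing the nvinfer element with nvinferserver and pointing it at a protobuf text config. A minimal sketch of such a config follows; the model name, repository path, and preprocessing values are placeholder assumptions, not taken from this thread:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      model_name: "peoplenet"   # placeholder model name
      version: -1               # -1 = latest version in the repo
      model_repo {
        root: "/opt/nvidia/deepstream/samples/triton_model_repo"  # placeholder path
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize { scale_factor: 0.0039215697906911373 }
  }
}
```

With this style of config, the same pipeline topology works for both plugins; only the inference element and its config file change.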
system
Closed
July 4, 2023, 7:39am
8
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.