Inquiry about the DeepStream parallel inference sample app

DeepStream Version 7.0

Hi, I was experimenting with the deepstream_parallel_inference_app while referring to this link: deepstream_parallel_inference_app/README.md at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub

More specifically, I was trying out the bodypose_yolo_win1 example: deepstream_parallel_inference_app/tritonclient/sample/configs/apps/bodypose_yolo_win1/source4_1080p_dec_parallel_infer.yml at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub

I wanted to show all 4 sources instead, so I commented out show-source: 2 (line 48) under [tiled-display].
I also wanted inference to be performed on all 4 sources, so under [branch0] and [branch1] I changed src-ids: to src-ids: 0;1;2;3 for both (lines 209 and 246).
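For reference, the relevant groups after my edits look roughly like this (the keys around my two changes are paraphrased from memory and may not match the file exactly):

```yaml
tiled-display:
  enable: 1
  rows: 2
  columns: 2
  #show-source: 2    # commented out so the tiler shows all 4 sources

branch0:
  src-ids: 0;1;2;3   # line 209: changed so this branch infers on all 4 sources

branch1:
  src-ids: 0;1;2;3   # line 246: changed so this branch infers on all 4 sources
```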

However, in the output I got, the object detection model does not seem to be inferring on sources 2 and 3, while the bodypose model is not inferring on sources 0 and 3.

Regarding this, I have a few inquiries:

  1. Am I changing the correct configs? I’m under the impression that src-ids under the [branch] group is what I’m supposed to change.

  2. Is this considered 1 or 4 sources? deepstream_parallel_inference_app/tritonclient/sample/configs/apps/bodypose_yolo_win1/sources_4.csv at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub

  3. I noticed that this sample app does not print FPS information in the terminal. I referred to the link below to print out the statistics, but the FPS seems to be capped at 30 (the source FPS), although I can’t be completely sure. Is there a more straightforward way to produce performance statistics?

My goal for now is to output a single source with the annotations from both models. I know I can simply set show-source: 1 under [tiled-display] for that, but wouldn’t that mean I’m still running inference on all 4 sources?

Yes, this modification is correct.

It is considered 4 sources; num-sources represents the number of times the source appears.
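For example, if the CSV holds a single row whose num-sources field is 4, that one URI is added 4 times and the pipeline sees 4 sources. A rough sketch of such a row (column names follow the usual DeepStream source-list convention and may differ slightly from the actual file):

```csv
enable,type,uri,num-sources,gpu-id,cudadec-memtype
1,3,file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4,4,0,0
```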

deepstream-parallel-infer currently requires you to measure the FPS yourself. If you want to test the maximum performance, you can use a fakesink.
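As a minimal sketch of measuring the FPS yourself, you can attach a GStreamer buffer probe to a pad of your choice (the element names here are illustrative, not part of the sample app):

```c
#include <gst/gst.h>

/* Counts buffers passing a pad and prints the rate once per second.
 * Note: downstream of nvstreammux each buffer is a batch, so this
 * measures the batch rate, not the per-stream frame rate. */
static GstPadProbeReturn
fps_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static guint64 frames = 0;
  static gint64 window_start = 0;
  gint64 now = g_get_monotonic_time ();

  if (window_start == 0)
    window_start = now;
  frames++;

  if (now - window_start >= G_USEC_PER_SEC) {
    g_print ("Measured rate: %.2f buffers/s\n",
             frames * (gdouble) G_USEC_PER_SEC / (now - window_start));
    frames = 0;
    window_start = now;
  }
  return GST_PAD_PROBE_OK;
}

/* Attach after the pipeline is built, e.g. on the tiler's sink pad:
 *   GstPad *pad = gst_element_get_static_pad (tiler, "sink");
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, fps_probe_cb, NULL, NULL);
 *   gst_object_unref (pad);
 */
```

To test the upper bound, switching the sink group in the YAML config to a fakesink with sync disabled (type: 1, sync: 0) stops the renderer from throttling the pipeline to the source’s 30 FPS.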

Regarding this issue, you can refer to this configuration file:

tritonclient/sample/configs/metamux/config_metamux0.txt

metamux is responsible for merging the metadata from the different branches together.
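A rough sketch of what that file configures (key names follow the gst-nvdsmetamux convention; check the actual file in the repo for the exact contents):

```ini
[property]
enable=1
## buffers from this sink pad are passed through to the src pad
active-pad=sink_0
## tolerance (in microseconds) when matching metadata across branches by PTS
pts-tolerance=60000
```

With metamux merging both branches’ metadata onto the same buffers, the OSD can draw both models’ annotations on one source.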

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks!
