DeepStream Version 7.0
Hi, I was dabbling with the deepstream_parallel_inference_app while referring to this link: deepstream_parallel_inference_app/README.md at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub
More specifically, I was trying out the bodypose_yolo_win1 example: deepstream_parallel_inference_app/tritonclient/sample/configs/apps/bodypose_yolo_win1/source4_1080p_dec_parallel_infer.yml at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub
I wanted to show all 4 sources instead, so I commented out show-source: 2 on line 48 under [tiled-display].
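For context, the relevant part of my [tiled-display] group now looks roughly like this (key names and values reproduced from memory of the sample yml, so treat them as approximate):

```yaml
tiled-display:
  enable: 1
  rows: 2
  columns: 2
  width: 1280
  height: 720
  # show-source: 2   # commented out so the tiler composites all 4 sources
```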
I also wanted inference to be performed on all 4 sources, so under [branch0] and [branch1] I changed src-ids to src-ids: 0;1;2;3 for both (lines 209 and 246).
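Concretely, both branch groups now look like this (sketch based on the sample yml; surrounding keys omitted, and the pgie-id values here are guesses since only src-ids was changed):

```yaml
branch0:
  pgie-id: 1        # guess; unchanged from the sample
  src-ids: 0;1;2;3  # was a subset of the sources before my edit

branch1:
  pgie-id: 2        # guess; unchanged from the sample
  src-ids: 0;1;2;3
```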
However, this is what I got for my output:
It seems that the object detection model is not inferring on sources 2 and 3, while the bodypose model is not inferring on sources 0 and 3.
Regarding this, I have a few inquiries:
- Am I changing the correct configs? I'm under the impression that src-ids under the [branch] group is what I'm supposed to change.
- Does this count as 1 source or 4 sources? deepstream_parallel_inference_app/tritonclient/sample/configs/apps/bodypose_yolo_win1/sources_4.csv at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app · GitHub
- I noticed that this sample app does not print FPS information to the terminal. I followed the link below to print the statistics, and the FPS appears to be capped at 30 (the source FPS), though I can't be completely sure. Is there a more straightforward way to produce performance statistics?
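For reference, the probe-side counting I hooked up boils down to this logic (simplified Python sketch of what I measure; the FpsCounter class and its method names are my own, not a DeepStream API):

```python
import time


class FpsCounter:
    """Sliding-window FPS estimate; call tick() once per buffer,
    e.g. from a GStreamer pad probe. Hypothetical helper, not part
    of DeepStream."""

    def __init__(self, window=30):
        self.window = window  # number of timestamps to retain
        self.stamps = []

    def tick(self, now=None):
        # Record one frame arrival; a timestamp can be injected for testing.
        self.stamps.append(time.monotonic() if now is None else now)
        if len(self.stamps) > self.window:
            self.stamps.pop(0)

    def fps(self):
        # Frames per second over the retained window.
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

Feeding it timestamps spaced 1/30 s apart reports 30 FPS, which matches the cap I'm seeing.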
My goal for now is to output a single source with the annotations from both models. I know I can simply set show-source: 1 under [tiled-display] for that, but wouldn't that mean I'm still running inference on all 4 sources?
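Put differently, if I only want inference on source 0, I assume the change belongs in both branch groups rather than in the tiler (sketch, assuming src-ids gates which streams enter a branch):

```yaml
branch0:
  src-ids: 0   # restrict the object detection branch to source 0

branch1:
  src-ids: 0   # restrict the bodypose branch to source 0
```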