SGIE does not run inference on every PGIE object with multiple cameras

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type( questions, new requirements, bugs): question, potentially a bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing): deepstream-parallel-infer-app, C++ version; 36 cameras; 4 PGIEs; 2 SGIEs; streammux set to 1920 x 1080; all sources are local RTSP streams
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description): using yolo plugin

Hi all,
We are working with the DeepStream parallel inference app to perform object detection on multiple RTSP streams (36 streams). We use 4 PGIEs and 2 SGIEs, where 1 PGIE connects to 2 SGIEs. Let me explain that part in more detail.

1 PGIE has 3 classes, and only 1 of those classes is passed on to the 2 SGIEs. The SGIEs have a minimum input width and height. We tested by comparing 1 RTSP stream against 36 RTSP streams. We collected the statistics below by monitoring 1 source: an RTSP stream broadcast from a file with a 15-minute duration.

On a Single Source RTSP

  • Frames processed: 4534
  • PGIE objects detected: 4764; maximum objects in a batch = 8
  • PGIE objects passing min W & H: 2119 objects
  • PGIE objects with no SGIE object: 252 objects (11%)

On Multiple Sources RTSP

  • Frames processed: 4561
  • PGIE objects detected: 4292; maximum objects in a batch = 12
    (maximum persons in a batch = 12)
  • PGIE objects passing min W & H: 1905 objects
  • PGIE objects with no SGIE object: 1595 objects (83%)
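
To make the comparison concrete, the percentages above follow directly from the reported counts. A quick check in plain Python (no DeepStream dependency; the helper name is ours):

```python
# Sanity check of the "no SGIE object" ratios reported above.
# passing = PGIE objects that pass the SGIE's minimum width/height filter
# no_sgie = of those, PGIE objects for which the SGIE produced no output

def sgie_miss_rate(passing: int, no_sgie: int) -> float:
    """Fraction of size-qualified PGIE objects that got no SGIE result."""
    return no_sgie / passing

single_source = sgie_miss_rate(passing=2119, no_sgie=252)   # 1 RTSP stream
multi_source = sgie_miss_rate(passing=1905, no_sgie=1595)   # 36 RTSP streams

print(f"1 stream  : {single_source:.1%}")   # 11.9%
print(f"36 streams: {multi_source:.1%}")    # 83.7%
```

So roughly 7x more size-qualified persons come out of the SGIE with no result when all 36 streams run, even though the per-source object counts are similar.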

We have run this test multiple times and these are our final statistics; this is not just luck or a random event. It is clear that using multiple sources cuts down our detection rate.
Our PGIE batch size is set to 36 and the SGIE batch size to 32 (we also tested 64, but no luck; it gave the same result).
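
For context, the object gating described above is controlled by a handful of Gst-nvinfer properties in the SGIE config. A minimal sketch of the relevant YAML keys follows; the values are illustrative, not our actual settings:

```yaml
property:
  process-mode: 2            # 2 = secondary mode (operate on objects, not frames)
  gie-unique-id: 5           # illustrative unique ID for this SGIE
  operate-on-gie-id: 1       # only consume objects produced by the PGIE with this ID
  operate-on-class-ids: 0    # only the single class we forward (e.g. person)
  batch-size: 32
  input-object-min-width: 64   # objects smaller than this are skipped
  input-object-min-height: 64
```

Objects filtered out by these properties never reach the SGIE at all, which is why we count "objects passing min W & H" separately above.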

We have already looked through other related issues without finding a solution.

We want to make sure that the SGIE detection rate for multiple RTSP sources can match that of a single RTSP source.
Any links to DeepStream documentation or references related to my question would be greatly appreciated.

Thanks in advance.

  1. Could you attach the pipeline graph by referring to our FAQ?
  2. What do you mean by "PGIE objects that have no SGIE object"?
  3. From your description, this problem should have nothing to do with deepstream-parallel-infer-app or the source format. Can you use a simpler demo, like deepstream-test2 with a local file source, to reproduce the problem?

Hi @yuweiw, thanks for replying!

  1. Here I attach the pipeline graph.
    deepstream_graph.zip (14.6 MB)

  2. To make it simpler: the PGIE objects are persons, and the SGIE objects are PPE.

  • Based on our test comparing 1 RTSP stream and 36 RTSP streams, both using the same 15-minute file broadcast as RTSP, we conclude: 1 RTSP stream gives us more PPE detections than 36 RTSP streams.
  • So "PGIE objects that have no SGIE object" means persons for whom no PPE was detected. Comparing the monitored source with 36 RTSP streams versus 1 RTSP stream, the PPE detection rate is higher with 1 RTSP stream.
  3. From our test with local file sources, comparing 1 file and 36 files, we do not see this issue; it happens only with RTSP.

Here I also attach the YAML configuration files for the PGIE and SGIE.
config-yml.zip (4.6 KB)

Thank you very much.

The weird thing is that there is no problem with local video; in theory, the video source should not affect the inference.
The resolution of the graph you attached is too high; we can't open it properly.
We have a simpler demo similar to your example: deepstream_lpr_app (PGIE: car; SGIE: license plate).
Could you try to use this app to reproduce your issue? If we can reproduce the problem on our side, we can analyze and solve it faster.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.