NVStreammux Settings to Guarantee Synchronization and Full Batches

I am looking for help configuring the streammux element. Given a batch size equal to the number of streammux sink pads, and a guarantee that frames with the exact same PTS will be presented on each of those sink pads more or less instantaneously (though with some jitter in when the streammux receives them), what settings should I use to guarantee that a full batch is pushed out each time those frames are presented to the streammux?

For some more color on this problem, I am streaming panoramic video into an nvurisrcbin. The model that I am using operates on a more typical aspect ratio, so I am cropping two RoIs out of the same image and trying to send them through an nvinfer element in the same batch. Here is a simplified diagram of my pipeline:

                +-> nvvideoconvert -+
                |                   |
                |                   v
nvurisrcbin -> tee              streammux -> nvinfer -> fakesink
                |                   ^
                |                   |
                +-> nvvideoconvert -+

I am having trouble coming up with settings that guarantee the two cropped regions from the same frame wind up in the same batch. I am finding that the streammux will occasionally push through a batch of size one and then wind up out of sync.

I would think that specifying sync-inputs=1 and raising batched-push-timeout to the max value would help address this, but the problem persists. I have also tried adding overall-min-fps-n=1 and overall-min-fps-d=1000 to the streammux configuration file, but again, the muxing still falls out of sync.
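For reference, here is a sketch of the configuration I have been experimenting with. The key names are my reading of the Gst-nvstreammux documentation, so treat this as an illustration of what I tried rather than a known-good setup:

    # streammux element properties
    batch-size=2
    sync-inputs=1

    # streammux config file, [property] group
    [property]
    batch-size=2
    overall-min-fps-n=1
    overall-min-fps-d=1000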

I have also confirmed with pad probes on the nvvideoconvert and streammux src pads that the source frames are arriving more or less synchronously:

1752019527.1466298: src_0 pushing frame with pts 1874484747
1752019527.1467605: src_1 pushing frame with pts 1874484747
1752019527.1554282: Batch PTSs: [1874484747, 1874484747]
1752019531.5618634: src_0 pushing frame with pts 1875297564
1752019531.5619516: src_1 pushing frame with pts 1875297564
1752019531.5620193: Batch PTSs: [1875297564]
1752019531.562435: src_0 pushing frame with pts 2062815057
1752019531.562484: Batch PTSs: [2062815057, 1875297564]
1752019531.5625913: src_1 pushing frame with pts 2062815057

• Hardware Platform (Jetson / GPU)

GPU

• DeepStream Version

7.0

• JetPack Version (valid for Jetson only)
• TensorRT Version

8.6.1.6-1+cuda12.0

• NVIDIA GPU Driver Version (valid for GPU only)

535.230.02

• Issue Type (questions, new requirements, bugs)

Question

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description)

If your stream is a live stream, please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Thanks for the response, Fiona. Unfortunately, the linked article did not address our issue. I’m hoping that this experience can help me grow as a DeepStream power user and build a stronger mental model of how the streammux configuration affects its behavior. The documentation has proved an insufficient tool in this regard.

The question is: how can I configure a streammux element to guarantee full batches when

  • the number of sources of the streammux matches the intended batch size
  • there is a semantic guarantee from the pipeline that a buffer will be ready on all sources at the same time
  • there is no guarantee of regular timing for the buffers on the sources (there may be large delays between buffers, and buffers may arrive with out-of-order PTSs)

Thanks for your help!

According to your original post, your pipeline has only one live stream, which you split into multiple inputs with “tee” and “nvvideoconvert”. Neither “tee” nor “nvvideoconvert” changes the video frame timestamps, so from nvstreammux’s point of view the frames arriving on its sink pads all carry the same timestamp and can be batched into a full batch. The nvstreammux configuration should be set properly to guarantee that batches are generated the right way. Please follow the instructions in DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
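As a sketch, a streammux config file along the following lines asks the muxer to wait for a full batch and never to place two frames from the same sink pad into one batch (key names are per the Gst-nvstreammux documentation; the values here are illustrative and should be adjusted to your pipeline):

    [property]
    algorithm-type=1
    batch-size=2
    # do not allow two frames from the same source in one batch
    max-same-source-frames=1
    # lower bound on how often a (possibly partial) batch is pushed
    overall-min-fps-n=5
    overall-min-fps-d=1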

As to your description “there may be large delays between buffers, buffers may arrive with out-of-order PTSs”, I don’t understand what you mean by “buffers”. If you are referring to the buffers that are fed to nvstreammux and batched by it, they carry the same PTS because they come from the same frame of the original source. “tee” and “nvvideoconvert” will neither drop frames nor change their timestamps.

If you are referring to the frames in the original source, and it is a live source, then it is indeed possible that “there may be large delays between buffers, buffers may arrive with out-of-order PTSs”. But that has nothing to do with nvstreammux or batching.

The nvstreammux documentation also describes the mechanism for generating batches; please refer to it first: Gst-nvstreammux — DeepStream documentation

**There has been no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks**

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.