How to accelerate a single-stream pipeline with a batch size greater than 1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Xavier
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• Issue Type( questions, new requirements, bugs)

I’m new to DeepStream and I’m wondering if it’s possible to accelerate my instance-segmentation pipeline with a single input stream.

For example: is there a possibility to use some kind of query before the nvinfer plugin that would wait for 4 frames, then run inference on a batch of 4 and output the boxes and masks for the whole batch to the next plugin in the pipeline?
I understand that there will be added latency, but it should not be large: if we have 24 frames per second, then 4/24 ≈ 0.17 seconds, and if we choose a batch size of 2 the latency will be even smaller, approx. 0.08 s.
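The latency estimate above can be sketched with simple arithmetic (assuming a constant source frame rate):

```python
def batching_latency(batch_size: int, fps: float) -> float:
    """Extra latency added by waiting to fill a batch of frames."""
    return batch_size / fps

# At a 24 FPS source, waiting for 4 frames adds ~0.17 s; 2 frames, ~0.08 s.
print(round(batching_latency(4, 24), 2))  # 0.17
print(round(batching_latency(2, 24), 2))  # 0.08
```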

What do you want to query? Here is the batch-size setting method: Frequently Asked Questions — DeepStream 6.1.1 Release documentation

OK, maybe my explanation was not explicit enough.
Here is the quote from your link:

We recommend that the nvstreammux’s batch-size be set to either number of sources linked to it or the primary nvinfer’s batch-size.

I have only one source. If I put batch-size=1 in the [property] section of the nvinfer config and set the batch-size of the streammux plugin to 1 (I’m using the Python bindings to build the pipeline):

streammux.set_property('batch-size', 1)

I get approximately 13 FPS.

When I set a batch size of 4 in both nvinfer and streammux to accelerate the pipeline, I get the same ~13 FPS. I expected the FPS to increase.

@fanzh So the FPS does not increase as expected. Can DeepStream work with batches greater than 1 for one source to increase throughput?
For calculating FPS I use the standard class from the examples (deepstream_python_apps/ at v1.1.0 · NVIDIA-AI-IOT/deepstream_python_apps · GitHub).

No, the batch-size should be the same as the number of sources. What is your source's FPS? If you develop your own DeepStream C++ code, you could also refer to DeepStream SDK FAQ - #12 by bcao to measure the latency of the pipeline components. Please also refer to this topic: The deepstream-test3 demo using rtsp webcam delayed

@fanzh My source FPS is 24, and I’m developing with the Python bindings.
So my question is whether there is some solution to work with one source and a batch size greater than 1: some kind of query after the source that waits for a number of frames equal to the batch size and then outputs them to nvstreammux → nvinfer.

The muxer uses a round-robin algorithm to collect frames from the sources; one batch will get only one frame from each source even if the batch-size is greater than 1.
Which sample are you testing? Did you use a custom model? Please check first whether the FPS is OK.

Here are more details about nvstreammux: Gst-nvstreammux — DeepStream 6.1.1 Release documentation.
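The round-robin behaviour described above can be illustrated with a small pure-Python model (a sketch of the documented semantics, not actual DeepStream code):

```python
def collect_batch(source_queues, batch_size):
    """Model of nvstreammux round-robin collection: at most one frame is
    taken from each source per batch, so the effective batch size is
    capped by the number of sources, regardless of the batch-size setting."""
    batch = []
    for queue in source_queues:
        if len(batch) == batch_size:
            break
        if queue:  # this source has a frame waiting
            batch.append(queue.pop(0))
    return batch

# One source with batch-size 4: every batch still holds a single frame.
single_source = [f"frame{i}" for i in range(8)]
print(collect_batch([single_source], 4))  # ['frame0']
```

This is why raising the batch-size on its own does not help with one source: there is never more than one frame per source in a batch.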

Thanks, I’ve read this.

Is it possible to take one source, let’s say an RTSP stream, make several sources from it, and feed them into nvstreammux, in order to infer on one stream with a batch greater than 1?

Do you mean that many copies of the same source will feed into nvstreammux? In that case, the nvstreammux’s batch-size should be the number of sources linked to it.

I mean: is there any way to divide one source into several and then feed them into nvstreammux?

Yes, you can use tee to divide one source into several, or use a streaming server to output several RTSP sources.

Thanks for your response. As I’ve read in the GStreamer documentation, tee duplicates the frames, and a streaming server does the same, but I need one frame to go to one branch and the next frame to another, so that I can pass them to nvstreammux to build a batch from different frames.
Let’s say I have the following pipeline:
Decodebin → nvstreammux → nvinfer
Maybe I can add a probe after decodebin, build the batch myself and change the batch metadata, and then skip nvstreammux and connect directly to nvinfer?
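The difference between what tee does and what is wanted here can be modelled in plain Python (a conceptual sketch, not GStreamer code):

```python
def tee_split(frames, n_branches):
    """GStreamer tee semantics: every branch receives a copy of every frame."""
    return [list(frames) for _ in range(n_branches)]

def round_robin_split(frames, n_branches):
    """Desired behaviour here: frames are distributed, not duplicated."""
    branches = [[] for _ in range(n_branches)]
    for i, frame in enumerate(frames):
        branches[i % n_branches].append(frame)
    return branches

frames = ["f0", "f1", "f2", "f3"]
print(tee_split(frames, 2))          # both branches get all four frames
print(round_robin_split(frames, 2))  # [['f0', 'f2'], ['f1', 'f3']]
```

With tee, a batch built from the branches would contain the same frame several times, which is why duplication does not achieve what is asked for here.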

Please link nvstreammux and nvinfer; please refer to the DeepStream sample deepstream-test1.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.