Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Xavier
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.2
• TensorRT Version: 8.2.1
• Issue Type (questions, new requirements, bugs): question
I’m new to DeepStream, and I’m wondering whether it is possible to accelerate my instance-segmentation pipeline when there is only a single input stream.
For example, if I choose a batch size of 4: is there a way to use some kind of queue before the nvinfer plugin that waits for 4 frames, runs inference on the batch of 4, and then outputs the boxes and masks for the whole batch to the next plugin in the pipeline?
I understand that there will be some latency, but it should not be large: at 24 frames per second, 4/24 ≈ 0.17 seconds, and with a batch size of 2 the latency would be even smaller, roughly 0.08 s.
@fanzh My source FPS is 24, and I’m developing with the Python bindings.
So my question is whether there is some way to work with one source and a batch size greater than 1: some kind of queue after the source that waits for a number of frames equal to the batch size and then pushes them to nvstreammux → nvinfer.
The muxer uses a round-robin algorithm to collect frames from the sources; one batch will contain only one frame from each source, even if the batch-size is greater than 1.
Which sample are you testing? Did you use a custom model? Please use deepstream_test_1.py to check whether the FPS is OK.
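If you just want a quick FPS check in your own pipeline, a buffer probe like the one below is one way to count frames. This is only a rough sketch, not code from deepstream_test_1.py: the element (`osd`) and pad names at the bottom are placeholders for whatever element sits late in your pipeline.

```python
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Rough FPS counter: attach this probe to a pad late in the pipeline
# (e.g. the sink pad of nvdsosd).
frame_count = 0
last_report = time.time()

def fps_probe(pad, info, user_data):
    global frame_count, last_report
    frame_count += 1
    now = time.time()
    if now - last_report >= 5.0:  # report every 5 seconds
        print(f"Average FPS over last 5 s: {frame_count / (now - last_report):.1f}")
        frame_count = 0
        last_report = now
    return Gst.PadProbeReturn.OK

# Attach the probe, assuming `osd` is an nvdsosd element already in your pipeline:
# osd_sink_pad = osd.get_static_pad("sink")
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, fps_probe, 0)
```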
Thanks for your response. As I’ve read in the GStreamer documentation, the tee element duplicates each frame (the same is true of a streaming server), but I need one frame to go to one branch and the next frame to another, so that I can pass them to nvstreammux and build a batch from different frames.
Let’s say I have the following pipeline:
Decodebin → nvstreammux → nvinfer
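In the Python bindings that would look roughly like this. It is only a minimal sketch, assuming uridecodebin with a local file; the URI, the nvinfer config path, and the resolution/property values are placeholders, not part of my real setup.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("single-stream-pipeline")

# uridecodebin → nvstreammux → nvinfer (URI and config path are placeholders)
source = Gst.ElementFactory.make("uridecodebin", "source")
source.set_property("uri", "file:///path/to/input.mp4")

streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)            # one source, so effectively a batch of 1
streammux.set_property("batched-push-timeout", 40000)

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary.txt")

for elem in (source, streammux, pgie):
    pipeline.add(elem)
streammux.link(pgie)

# decodebin creates its src pads dynamically, so the link to the muxer's
# request pad is made from the "pad-added" callback
def on_pad_added(decodebin, pad):
    caps = pad.get_current_caps() or pad.query_caps(None)
    if not caps.get_structure(0).get_name().startswith("video"):
        return
    sinkpad = streammux.get_request_pad("sink_0")
    pad.link(sinkpad)

source.connect("pad-added", on_pad_added)
```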
Maybe I can add a probe after decodebin, build the batch myself and modify the batch metadata, and then skip nvstreammux and connect directly to nvinfer?
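To make the idea concrete, attaching such a probe would look something like this in the Python bindings. This is just a sketch of where the hook goes; the actual batching and metadata rewriting inside the callback is exactly the part I don’t know how to do.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def decoded_frame_probe(pad, info, user_data):
    buf = info.get_buffer()
    if buf is None:
        return Gst.PadProbeReturn.OK
    # Each decoded frame can be inspected here; how to hold frames back,
    # assemble them into a batch and write batch metadata without
    # nvstreammux is the open question.
    print(f"decoded frame, pts={buf.pts}")
    return Gst.PadProbeReturn.OK

def on_decoder_pad_added(decodebin, pad):
    # decodebin's src pads appear dynamically, so the probe is attached
    # from the pad-added callback rather than on a static pad
    pad.add_probe(Gst.PadProbeType.BUFFER, decoded_frame_probe, 0)

# Assuming `decoder` is the decodebin element already added to the pipeline:
# decoder.connect("pad-added", on_decoder_pad_added)
```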