Create batch of frames for a single file stream

Hi, I’m using the test3 example provided with the DeepStream SDK. I provide my ONNX model in the DeepStream custom config TXT file. The app creates batches when I provide multiple streams, but it isn’t doing so for a single stream. I have already added a batch-size argument under [property] in the TXT file.
Please suggest how to give multiple frames as input to the model for a single video file. Thanks.
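For reference, a minimal sketch of the [property] group I mean (the onnx-file path and batch-size value here are illustrative, not my exact config):

[property]
onnx-file=model.onnx
batch-size=4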
link to onnx and config file: deepstream-share-midas - Google Drive
• Hardware Platform (Jetson)
• DeepStream Version 5.0
• JetPack Version 4.4.1
• TensorRT Version 7.1.3
• Issue Type: questions

./deepstream-test3-app [uri1] [uri2] … [uriN]
Each uri can be the same stream.

@Amycao thanks for the reply.
The method you suggested would process the same frames n times (n = batch size) and would thus keep the processing time of a video the same, or even increase it. I want to decrease that time by using batching.
Please suggest a way to do so.

EDIT: I’m getting a live video stream. I need to increase the inference speed, and I chose batching for that (accepting the added latency). If there is a better way to achieve a higher speed than the ~6 fps I currently get, please do suggest it.

It’s batch processing, not processing the same frames n times. Set both the streammux batch size and the nvinfer batch size to the number of streams.
Check this about the application running slowly:
Troubleshooting — DeepStream 6.1.1 Release documentation, section “The DeepStream application is running slowly”
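As a minimal sketch, the equivalent launch string for two streams would look like this (paths and dimensions are illustrative; the nvinfer config must also set batch-size=2):

gst-launch-1.0 \
  uridecodebin uri=file:///path/to/stream1.h264 ! m.sink_0 \
  uridecodebin uri=file:///path/to/stream2.h264 ! m.sink_1 \
  nvstreammux name=m batch-size=2 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink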

it’s batching, yes, but:
suppose we have n streams: s1, s2, …, sn.
each stream si has m frames: si1, si2, …, sim.

let’s say we provide 3 streams s1, s2, s3, so the batch size is 3.
it then creates the batches [s11, s21, s31], [s12, s22, s32], [s13, s23, s33] and so on.
for the same stream given 3 times, this becomes: [s11, s11, s11], [s12, s12, s12], [s13, s13, s13]

so yes, batching is happening, but it’s not useful: the same frame gets processed batch-size number of times.

Sample output from running test3, as seen on screen.
/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# deepstream-test3-app file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_qHD.h264 file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_qHD.h264 file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_qHD.h264 file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_qHD.h264
As you can see, the same frame number has been processed multiple times (one stream was loaded before the others, hence the difference in frame numbers, but you get the gist):

Frame Number = 81 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 81 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 81 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 94 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 82 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 82 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 82 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 83 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 95 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 83 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 83 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 96 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 84 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 84 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 84 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 97 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 85 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 85 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 85 Number of objects = 0 Vehicle Count = 0 Person Count = 0

If you think inferencing is the bottleneck, you can skip inference on some frames by setting the “interval” property of nvinfer to improve pipeline performance.
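A minimal sketch of that setting in the nvinfer config TXT (the value is illustrative; interval=1 skips one batch between inferences, so the model runs on every other batch):

[property]
interval=1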

If you think batching frames over a period can help, then nvmultistreamtiler and nvstreamdemux cannot be used; you should write a new app to implement the following pipeline:

gst-launch-1.0 \
  filesrc location=/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! \
  qtdemux name=demux demux.video_0 ! h264parse ! \
  nvv4l2decoder num-extra-surfaces=5 ! m.sink_0 \
  nvstreammux name=m batch-size=4 width=1280 height=720 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

The value of “num-extra-surfaces” for nvv4l2decoder should be larger than the “batch-size” of nvstreammux. This method needs a lot of extra memory to run and only supports a single stream.
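For the batch to actually reach the model, the nvinfer config file referenced in the launch string should carry a matching batch size; a minimal sketch (batch-size is the real nvinfer config key, the value mirrors the pipeline above):

[property]
batch-size=4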
