DeepStream with RTSP stream, 3 s latency

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** NX
**• DeepStream Version** 5.0
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version**
**• NVIDIA GPU Driver Version (valid for GPU only)**
FPS = 25, nvinfer interval = 9, inference time: 200–250 ms
1: When I walk into the camera's field of view, I only see the corresponding result (via NvDsBatchMeta) after three seconds.
2: After a while, when I leave the camera's field of view, the person is only detected as having left after 25 seconds.
How can I fix these two problems?

Hi,

If inference takes 250 ms, the pipeline runs at about 4 fps while the input stream is 25 fps. It is expected that the GStreamer queues will accumulate buffers, which shows up as steadily increasing latency until the queues reach maximum capacity and start dropping buffers.
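The arithmetic behind that can be sketched as follows (the 25 fps and 250 ms figures come from this thread; the rest is back-of-the-envelope reasoning):

```python
# A 25 fps RTSP source feeding a pipeline whose inference step
# takes ~250 ms per frame (worst case quoted in the thread).
input_fps = 25
infer_time_s = 0.250
process_fps = 1 / infer_time_s          # ~4 fps at the bottleneck

# Frames pile up in the queues at the difference of the two rates:
backlog_growth_per_s = input_fps - process_fps   # 21 frames per second

# Each queued frame waits one full inference slot before it is drained,
# so every second of streaming adds roughly this much latency:
extra_latency_per_s = backlog_growth_per_s * infer_time_s  # 5.25 s

print(process_fps, backlog_growth_per_s, extra_latency_per_s)
```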

If latency matters more to you, you can explicitly shrink the queues so that they start dropping buffers sooner (`queue leaky=2 max-size-buffers=1`) and set `sync=false` on your sink (this is easiest to do in a gst-launch pipeline).
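The effect of a leaky queue can be illustrated with a toy simulation (plain Python, no GStreamer required; the 25 fps and 250 ms figures come from this thread, everything else is a deliberately simplified model):

```python
from collections import deque

INPUT_FPS = 25        # frames arriving per second
INFER_S = 0.25        # time the slow element needs per frame
N_FRAMES = 200        # simulate 8 seconds of stream

def last_frame_latency(max_size, leaky):
    """Toy model of a GStreamer queue feeding a 250 ms/frame element.

    leaky=True mimics `queue leaky=2 max-size-buffers=<max_size>`
    (drop the oldest buffer on overflow); leaky=False mimics a large
    queue that never drops. Returns the end-to-end latency, in
    seconds, of the last frame that gets processed.
    """
    q = deque()
    busy_until = 0.0   # time at which the slow element becomes free
    latency = 0.0
    for i in range(N_FRAMES):
        arrival = i / INPUT_FPS
        # the element picks up queued frames whenever it is free
        while q and busy_until <= arrival:
            t_in = q.popleft()
            busy_until = max(busy_until, t_in) + INFER_S
            latency = busy_until - t_in
        if leaky and len(q) >= max_size:
            q.popleft()            # leaky=2: discard the oldest buffer
        q.append(arrival)
    while q:                       # drain whatever is still queued
        t_in = q.popleft()
        busy_until = max(busy_until, t_in) + INFER_S
        latency = busy_until - t_in
    return latency

print(last_frame_latency(1, True))        # bounded, well under 1 s
print(last_frame_latency(10_000, False))  # grows to tens of seconds
```

With the leaky single-slot queue the element always works on the newest frame, so latency stays near one inference time; with the non-leaky queue the backlog (and latency) keeps growing for the whole run.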

Another, more advanced solution we have used is to implement an element that transfers the DeepStream metadata to another stream. This is not perfect, because the metadata doesn't correspond exactly to the frame you are seeing, but in the end you reduce the latency without compromising the frame rate.


Pipeline: decodebin → rtph264depay → h264parse → capsfilter → nvv4l2decoder → nvstreammux → nvinfer → tracker → nvosd → httpsink (it sends asynchronously, so it's fine).

@miguel.taylor
Thanks for sharing.

@RayZhang
You can set `interval` in nvinfer to skip frames during inference, which will ease the GPU load and increase pipeline performance. Also set `sync` to false in the sink, and set `live-source` to true in nvstreammux.
You can also check this troubleshooting entry:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_troubleshooting.html#
under the section "The DeepStream application is running slowly".
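To see why `interval` helps, here is the load arithmetic with the numbers quoted in this thread (interval=N means N frames are skipped between inferences, so inference runs on 1 frame out of every N + 1):

```python
# Effect of the nvinfer `interval` property on GPU load,
# using the figures from this thread.
input_fps = 25
infer_time_s = 0.250
interval = 9

inferences_per_s = input_fps / (interval + 1)        # 2.5 per second
gpu_busy_fraction = inferences_per_s * infer_time_s  # 0.625

# The GPU keeps up as long as this fraction stays below 1.0; with
# interval=9 there is headroom, so the pipeline can sustain 25 fps
# provided the sink does not block (sync=false).
print(inferences_per_s, gpu_busy_fraction)
```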

The latency still increases. What does the nvstreammux property `buffer-pool-size` mean?

You can use `sudo tegrastats` to check system stats; if the GPU is fully loaded, you may need to increase `interval` in nvinfer.

Hi,
What about the streammux batch push timeout set in the config (`batched-push-timeout`)?
Did you set `sync` in the sink to 0?
And is the DeepStream reference app being used?
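For reference, the settings discussed in this thread map onto a deepstream-app style config roughly like this (group and key names follow the DeepStream 5.0 reference app; the values are illustrative examples, not recommendations):

```
[streammux]
live-source=1
# microseconds to wait before pushing an incomplete batch
# (40000 us = 40 ms, about one frame period at 25 fps)
batched-push-timeout=40000

[primary-gie]
# skip 9 frames between inferences
interval=9

[sink0]
# do not sync on buffer timestamps; render as fast as possible
sync=0
```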

Hey
I am trying to follow this solution and would like to ask in which file you are changing all the required parameters. I am working on the deepstream-occupancy-analytics PeopleNet application.

Hi janet,

Please open a new topic for this question. Thanks.