Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** Jetson NX
**• DeepStream Version** 5.0
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version**
**• NVIDIA GPU Driver Version (valid for GPU only)**

Stream FPS = 25, nvinfer `interval=9`, inference time: 200-250 ms
1: When I walk into the camera's field of view, the corresponding result (via NvDsBatchMeta) only appears after about three seconds.
2: When I later leave the camera's field of view, the person is only detected as having left after about 25 seconds.
How can I fix these two problems?
If inference takes 250 ms, the pipeline is running at roughly 4 fps while you are feeding it a 25 fps stream. It is expected that GStreamer queues will accumulate buffers, which shows up as steadily increasing latency until the queues reach maximum capacity and start dropping buffers.
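As a back-of-the-envelope sketch of why the latency settles where it does (the default queue capacity of 200 buffers is an assumption about a stock `queue` element, not a value from the post):

```python
# Rough latency estimate for a queue feeding a slow element.
# Assumption (not from the original post): a stock GStreamer queue
# defaults to max-size-buffers=200.

def steady_state_latency(queue_capacity, processing_fps):
    """Latency once the queue is full: buffers waiting / drain rate."""
    return queue_capacity / processing_fps

infer_time_s = 0.25                  # 250 ms from the post
processing_fps = 1 / infer_time_s    # ~4 fps

# Default queue: latency grows until the queue fills up.
print(steady_state_latency(200, processing_fps))  # 50.0 seconds

# Leaky queue with max-size-buffers=1: at most one stale buffer waits.
print(steady_state_latency(1, processing_fps))    # 0.25 seconds
```

This matches the order of magnitude of the delays you are seeing: a full default-sized queue draining at ~4 fps implies tens of seconds of latency.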
If latency is more important to you, you can explicitly reduce the size of the queues so they start dropping buffers sooner (`queue leaky=2 max-size-buffers=1`) and set `sync=false` on your sink (this is easiest to do in a `gst-launch-1.0` pipeline).
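A minimal sketch of such a pipeline, assuming an RTSP source; the URI, resolution, config file path, and sink element are placeholders you would replace with the ones from your own application:

```shell
# Sketch only: source URI, mux dimensions, nvinfer config, and sink
# are placeholders, not values from this thread.
gst-launch-1.0 \
  uridecodebin uri=rtsp://camera/stream ! \
  queue leaky=2 max-size-buffers=1 ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer.txt interval=9 ! \
  nvvideoconvert ! nvdsosd ! \
  nveglglessink sync=false
```

`leaky=2` (downstream) drops the oldest queued buffer when a new one arrives, and `sync=false` stops the sink from waiting on buffer timestamps, so stale frames are discarded instead of accumulating.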
Another, more advanced solution we have used is to implement an element that transfers DeepStream metadata to another stream. This is not perfect, because the metadata does not correspond exactly to the frame you are seeing, but it reduces the latency without compromising the framerate.
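The underlying pattern can be sketched in plain Python, without any DeepStream API (the class and method names below are illustrative, not from DeepStream): the display path runs at full rate and annotates each frame with whatever result the slow inference path published most recently, which is exactly why the metadata can lag the frame.

```python
# Pattern sketch (plain Python, no DeepStream): a full-rate branch
# attaches the latest available metadata to each frame, while a slow
# inference branch updates that metadata asynchronously.
import threading

class LatestMeta:
    """Thread-safe slot holding the most recent inference result."""
    def __init__(self):
        self._lock = threading.Lock()
        self._meta = None

    def publish(self, meta):
        """Called by the slow inference branch when a result is ready."""
        with self._lock:
            self._meta = meta

    def attach(self, frame):
        """Called by the full-rate display branch for every frame."""
        with self._lock:
            return (frame, self._meta)

slot = LatestMeta()
slot.publish({"objects": ["person"]})  # inference finished for an old frame
print(slot.attach("frame_7"))          # a newer frame reuses that metadata
```

In a real DeepStream pipeline this would be a custom element or pad probe copying `NvDsBatchMeta` between branches; the sketch only shows the decoupling idea.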
Hey,
I am trying to follow this solution and would like to ask which file you change all the required parameters in. I am working on the deepstream-occupancy-analytics PeopleNet application.