• Hardware Platform (Jetson / GPU) : NVIDIA GeForce RTX 3090
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only) :
• TensorRT Version : 12.2
• NVIDIA GPU Driver Version (valid for GPU only) : 535.104.05
• Issue Type (questions, new requirements, bugs) :
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am using deepstream-test3.py with a tracker and other models added.
When I run the pipeline on 32 local video files, I get object counts from 400 to 600. But when I use a tool to simulate these same videos as RTSP streams and run the pipeline on them, the object counts drop to between 100 and 250.
We also do not set the batched-push-timeout parameter.
Why does this drop happen even though the same videos are used?
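For live RTSP inputs, the nvstreammux batching settings are often relevant to dropped frames. A minimal sketch of what setting them could look like in a deepstream-test3.py-style script — `batched-push-timeout` and `live-source` are real nvstreammux properties, but the values and the `streammux` variable name here are illustrative assumptions:

```
# Illustrative values: for live sources, batched-push-timeout is commonly set
# to roughly one frame interval in microseconds (e.g. 40000 for 25 fps), so the
# muxer pushes a partial batch instead of waiting for slow or lossy streams.
streammux.set_property('batched-push-timeout', 40000)
streammux.set_property('live-source', 1)  # hint that the inputs are live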
There may be re-encoding or packet loss when you use RTSP.
1. You need to make sure the video is not re-encoded when it is served over RTSP.
2. You need to make sure there is no packet loss when it is transmitted over RTSP.
The method you used could lead to the two problems I mentioned above.
1. VLC may decode the stream first, re-encode it, and then send it through RTSP.
2. DeepStream is based on GStreamer. The RTSP module in GStreamer may lose packets while transmitting the video data.
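One way to rule out the re-encoding problem is to serve the original elementary stream directly. A hedged sketch using the standard gst-rtsp-server `test-launch` example (the file name and the exact pipeline are assumptions about your setup); this pipeline only demuxes, parses, and re-packetizes the H.264 data, so no decode/re-encode step occurs:

```
./test-launch "( filesrc location=sample.mp4 ! qtdemux ! h264parse ! rtph264pay name=pay0 pt=96 )"
```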
There has been no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Let’s narrow this down first.
1. Could you try using TCP as the lower transport protocol for RTSP?
2. Could you use an H.264 stream with just 1 frame for comparison?
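For point 1, deepstream-test3.py builds its sources with uridecodebin, which emits the `source-setup` signal when the inner source element (rtspsrc for RTSP URIs) is created. A sketch of forcing TCP there — `protocols` is a real rtspsrc property and `0x4` is the TCP lower-transport flag, but the callback and `uri_decode_bin` names are assumptions about your script:

```
def cb_source_setup(bin, source, udata):
    # Only rtspsrc exposes "protocols"; 0x4 == GST_RTSP_LOWER_TRANS_TCP
    if source.find_property('protocols') is not None:
        source.set_property('protocols', 0x4)  # force TCP-interleaved RTP

uri_decode_bin.connect('source-setup', cb_source_setup, None)
```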