• Hardware Platform (Jetson / GPU): NVIDIA GeForce GTX 1080 Ti, CUDA 12.0
• DeepStream Version: 6.2.0
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
The graph is shown below: a single RTSP source connects directly to the NVIDIA video renderer, with the renderer's sync and qos properties set to false. The RTSP device and the host are connected through the same router, both plugged in with network cables.
Maybe you can try tuning the RTSP parameters to improve the quality.
Does your RTSP server support the TCP protocol? If so, you can set "select-rtp-protocol" to 4.
The "latency" property can also be set to a larger value, and enlarging "udp-buffer-size" may help as well.
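As a rough standalone sketch of these suggestions (outside Omniverse), the properties could be tried on DeepStream's nvurisrcbin. The camera URI and the exact latency/buffer values below are placeholders, and nveglglessink merely stands in for the Omniverse video renderer with the same sync/qos settings described above:

```shell
# Sketch: single RTSP source with TCP transport, larger jitter buffer,
# and an enlarged UDP receive buffer. Values are illustrative only.
gst-launch-1.0 \
  nvurisrcbin uri="rtsp://<camera-ip>/<stream-path>" \
      select-rtp-protocol=4 \
      latency=200 \
      udp-buffer-size=2097152 \
  ! nvvideoconvert \
  ! nveglglessink sync=false qos=false
```

If the stream is smooth in this standalone pipeline but not in the application, the problem is more likely on the rendering side than in the RTSP reception.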
Hi, I tried what you suggested: select-rtp-protocol=4, latency=100, and udp-buffer-size enlarged to 3x and 4x the default… but it did not work. The same RTSP stream read with OpenCV is smooth.
Hi, the situation still exists. How can I fix it? This is the log:
"2023-05-24 08:30:34 [595,585ms] [Error] [omni.kit.app._impl] [py stderr]: WARNING from element NVidia Video Renderer/NVidia Video Renderer23-sink: A lot of buffers are being dropped."
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Yes. It is strange that the hardware decoder is not enabled.
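One generic way to check whether the hardware decoder (NVDEC) is actually in use is to watch the GPU engine utilization while the stream plays; this is not specific to Omniverse or DeepStream:

```shell
# Per-second GPU engine utilization: the "dec" column should be
# non-zero while the stream is playing if NVDEC is doing the decoding.
nvidia-smi dmon -s u
```

A persistently zero "dec" column during playback would indicate the decode is falling back to software.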