Hi, I'm trying to benchmark two approaches: ffmpeg decoding via h264_cuvid, and a GStreamer pipeline using nvv4l2decoder.
With ffmpeg I'm getting frames with very low delay between the camera and presentation.
On the other hand, with the DeepStream SDK and its decoder I'm getting better throughput but a worse delay.
This is the GStreamer pipeline I open in an OpenCV VideoCapture:
rtspsrc location="rtsp://10.50.84.110/defaultPrimary?streamType=u" user-id=admin user-pw=admin ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw,format=RGBA,width=800,height=600 ! videoconvert ! video/x-raw,format=BGR ! appsink
Is there any parameter I can add to the decoder to improve the delay?
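For reference, here is a sketch of the same pipeline with the latency-related knobs I know of from standard GStreamer elements (not the decoder itself): rtspsrc's `latency` jitter-buffer property, and appsink's `sync`, `drop`, and `max-buffers` properties, which stop frames from queuing up when the consumer is slower than the stream. Whether nvv4l2decoder itself exposes extra low-latency properties on your platform is an assumption worth checking with `gst-inspect-1.0 nvv4l2decoder`.

```python
# Sketch of a lower-latency variant of the pipeline above.
# Assumptions: rtspsrc `latency`, appsink `sync`/`drop`/`max-buffers`
# are standard GStreamer properties; camera URL/credentials are from the post.
pipeline = (
    'rtspsrc location="rtsp://10.50.84.110/defaultPrimary?streamType=u" '
    "user-id=admin user-pw=admin latency=0 ! "   # shrink the RTP jitter buffer
    "rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "
    "video/x-raw,format=RGBA,width=800,height=600 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    # sync=false: don't wait for clock; drop old frames instead of queuing them
    "appsink sync=false drop=true max-buffers=1"
)
print(pipeline)

# Usage (requires OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```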