Pipeline freezes when using nvv4l2decoder and enable-last-sample=1

Hello, we are encountering a pipeline that freezes indefinitely when processing some videos with nvv4l2decoder. Below is a description of our setup.

• Hardware Platform=GeForce RTX 3070
• DeepStream Version=6.0.1-1
• TensorRT Version=8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)=470.63.01
• Issue Type=question/bug

How to reproduce the issue?

To demonstrate the issue, please consider the following GStreamer pipeline:

gst-launch-1.0 filesrc location=video_.mp4 ! parsebin ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! fakesink

In most scenarios we can read videos without any issue; however, in some rare cases a video causes the pipeline to get stuck indefinitely.

We can demonstrate this behavior with the following excerpt of such a video: video_.mp4 - Google Drive. This is only the part of the video where the freeze occurs; currently we are not able to provide the whole file.

We suspect that the issue is linked to the nvv4l2decoder element, since we do not encounter it when using another decoder. Here is an example of such a pipeline:

gst-launch-1.0 filesrc location=video_.mp4 ! parsebin ! avdec_h265 ! nvvideoconvert ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 nvbuf-memory-type=0 ! fakesink

We have also noticed that if we set enable-last-sample=0 on the sink element, the freeze does not occur (see the example below). However, this is not a viable solution for our current deployment, as we are using a third-party sink element that does not expose this property.
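For reference, this is how we tested that workaround (a sketch only: fakesink stands in here for our real third-party sink, and enable-last-sample is the standard GstBaseSink property):

gst-launch-1.0 filesrc location=video_.mp4 ! parsebin ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! fakesink enable-last-sample=0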

Is there any other way of modifying our pipeline (e.g. setting a property on an element, or adding another element before the sink) to mitigate the issue? Currently we cannot use nvv4l2decoder reliably, given the possibility of the pipeline getting completely stuck.
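To illustrate the kind of change we have in mind (purely hypothetical, we have not verified that this particular element helps), something along these lines, with an extra queue inserted before the sink:

gst-launch-1.0 filesrc location=video_.mp4 ! parsebin ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! queue ! fakesink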
