Decoding 4K streams causing issues on the decoder?

I’m currently on the latest DeepStream version.
The decoder maxes out at 100% when decoding more than 15 streams at close-to-4K resolution (2304×1296).
Is there a way to reduce the decoder load, for example by setting a capabilities filter on the nvv4l2decoder output to 1920×1080? If so, can you share an example?

Inspecting the element with gst-inspect-1.0 shows that width and height properties are available; can these be modified?

Setting streammux to 1920×1080 didn’t exactly help in my case; a sketch of what I’m trying is below. Any suggestions?
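
To make it concrete, here is a minimal single-stream sketch of what I mean (the RTSP URI is a placeholder, and as far as I understand the decode itself still runs at the coded resolution, with scaling happening afterwards in nvvideoconvert):

```
# Decode one IP camera stream, downscale to 1920x1080 after the decoder,
# and batch it through nvstreammux at 1920x1080.
# The nvstreammux width/height only set the mux output resolution; the
# decoder still processes the full 2304x1296 input.
gst-launch-1.0 rtspsrc location=rtsp://<camera-ip>/stream ! rtph264depay ! h264parse \
  ! nvv4l2decoder ! nvvideoconvert \
  ! 'video/x-raw(memory:NVMM), width=1920, height=1080' \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! fakesink
```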

Please provide complete information for your platform.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hardware Platform: dGPU (RTX 3070)
DeepStream Version: 5.1.0
TensorRT Version: 7.2.2
CUDA Version: 11.2
Driver Version: 460.32.03

@Fiona.Chen any update?

What is your camera type? CSI camera, USB camera or IP camera?

Have you measured the GPU performance while running the 4K case?

You can use the command “nvidia-smi dmon” to monitor the GPU performance.
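
For example, selecting the utilization group prints one line per GPU per second; the dec column shows the hardware decoder load:

```
# Sample GPU utilization once per second; the "dec" column shows
# the video decoder utilization (%) for each GPU.
nvidia-smi dmon -s u
```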

They are IP cameras.
The decoder utilization is around 75-80%.

According to your pipeline, the decoder’s only job is to decode the encoded streams coming from the IP cameras, so there is no use changing other parts of the pipeline. Note that 2304×1296 is roughly 1.44× the pixel count of 1920×1080 (2,985,984 vs. 2,073,600 pixels), so 15 such streams cost about as much decode work as 21–22 1080p streams. You may need to use cameras with smaller-resolution streams.

@Fiona.Chen Yes, I understand your point, but is there a way to address it from the device’s end?

Also, we’ve noticed issues with the decoder regardless of whether any process is running; the following post describes something I’m facing as well. The only difference is that I don’t have the patch on my machine.

Here’s a snapshot of the issue:

Any help would be appreciated, thanks.

The decoder on GPU 0 is at 100% utilization, so the 4K sources are overloading it. The decoder is occupied by the camera streams; it is no use doing anything with nvstreammux.

Please use other cameras with smaller resolution streams.
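
If swapping the cameras is not an option: many IP cameras can also serve a lower-resolution secondary stream over RTSP. The exact URI is vendor-specific, so the path below is purely illustrative:

```
# Hypothetical substream URI -- check the camera vendor's documentation for
# the real path, and set the substream resolution in the camera's web UI.
gst-launch-1.0 rtspsrc location=rtsp://<camera-ip>/substream ! rtph264depay \
  ! h264parse ! nvv4l2decoder ! fakesink
```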

Also, we’ve noticed issues with the decoder regardless of whether any process is running; the following post describes something I’m facing as well.

Can you please check the attached screenshot? There were no processes running on GPU 0, yet the decoder never dropped back to 0%, or even anywhere below 100%. This was observed on both GPUs. Any idea how to see what is blocking the decoder?
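
One way to check whether some process is still holding the GPU device nodes is fuser; a minimal sketch, assuming a standard Linux setup:

```
# Show processes with open handles on the NVIDIA device nodes.
# Run as root so processes from all users are visible.
sudo fuser -v /dev/nvidia*
```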

Do you mean that the decoder is occupied at 100% even when no process is running?

Yes, there are instances where I’ve seen it; the screenshot above shows one. Only a reboot seems to fix it.

So what is the problem now?

My only question is why the decoder misbehaves in certain scenarios. I don’t think a reboot should be the only way to fix it; every time something like this happens, the remote machine needs to be rebooted. Can this be resolved without a reboot?

We don’t know what is wrong, so we can’t say how to fix it. Is there any clue about the root cause?

Hello, I am a colleague of the poster tagged above.

I think we are facing a similar case on our GPU. My guess is that throttling may be triggering this issue, because the affected packets can then crash, get lost, or run into other problems during processing.

I hope NVIDIA can figure this issue out, since it looks like it could affect many customers.

@Fiona.Chen we’re not sure how to replicate the issue; so far it has happened only on the RTX cards. As for fixing it, are there any commands that can be issued to the GPU to forcefully free up the decoder? And is there anything other than nvidia-smi dmon to check why the spike occurs and never clears? It has been a pain point ever since.
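
For what it’s worth, a GPU-level reset can sometimes avoid a full reboot. This is only a sketch with caveats: it needs root, no process may be using the GPU, and reset is not supported on every board (GeForce cards often lack it):

```
# Attempt to reset GPU 0 without rebooting the machine.
# Requires root, an otherwise idle GPU, and a board that supports reset
# (many GeForce cards do not).
sudo nvidia-smi --gpu-reset -i 0
```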

Have you tried the same case with other cards? There is no dedicated tool for the codec alone.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.