I’m currently on the latest DeepStream version.
The decoder maxes out at 100% utilization when running more than 15 streams at resolutions close to 4K (2304x1296).
Is there a way to reduce the decoder load, for example by setting the capabilities of nvv4l2decoder to 1920x1080? If so, can you share an example?
Inspecting the element with gst-inspect shows there are width and height properties available — can these be modified?
Setting nvstreammux to 1920x1080 didn’t really help in my case. Any suggestions?
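For reference, a minimal sketch of where a scaling capsfilter would sit in such a pipeline. The camera URI, codec, and element ordering are assumptions, not taken from your setup. Note that on most platforms nvv4l2decoder outputs frames at the coded stream resolution, so a downstream nvvideoconvert plus capsfilter reduces load on nvstreammux and inference, not on the decode engine itself — only lowering the camera's encoded resolution reduces actual decoder load:

```shell
# Hypothetical pipeline: decode one RTSP H.264 stream, then downscale to
# 1080p with nvvideoconvert before handing frames downstream.
# rtsp://<camera-uri> is a placeholder for your camera address.
gst-launch-1.0 rtspsrc location=rtsp://<camera-uri> ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),width=1920,height=1080' ! \
  fakesink
```

This is a config-style sketch for illustration; it requires a DeepStream installation and an NVIDIA GPU or Jetson to run.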
Please provide complete information for your platform.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
According to your pipeline, the decoder is only doing the work of decoding these encoded streams from the IP cameras. Changing other parts of the pipeline will not help. You may need to use cameras that produce smaller-resolution streams.
@Fiona.Chen understood, but is there a way to address it from the device’s end?
Also, we’ve noticed issues with the decoder regardless of whether the process is running or not; the following post describes something I’m facing as well, the only difference being that I don’t have the patch on my machine.
The decoder on GPU0 is at 100% utilization, so the 4K sources are overloading it. The decoder is occupied by the camera streams; changing nvstreammux settings will not help.
Also, we’ve noticed issues with the decoder regardless of whether the process is running or not; the following post describes something I’m facing as well.
Please check the attached screenshot: there were no processes running on GPU 0, yet the decoder utilization never dropped back to 0, or even below 100%. This was observed on both GPUs. Any idea how to see what is blocking the decoder?
My main question is why the decoder behaves this way in certain scenarios. I don’t think a reboot should be the only fix, yet every time something like this happens, the remote machine needs to be rebooted. Can this be resolved without rebooting?
I think we are facing a similar case on our GPU.
My guess is that throttling can trigger this issue, because affected packets may be dropped, lost, or cause other problems in the process.
I hope NVIDIA is looking into this, since it seems likely to occur for many customers.
@Fiona.Chen we’re not sure how to replicate the issue; so far it has only happened on the RTX cards. As for fixing it, are there any commands that can be issued to the GPU to forcefully free up the decoder? And is there anything other than nvidia-smi dmon we can use to check why the spike occurs and never frees up? It has been a pain point ever since.
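One low-effort way to track this over time is to log the output of nvidia-smi dmon and flag GPUs whose decoder utilization stays pinned. Below is a minimal sketch: it assumes the `nvidia-smi dmon -s u` column layout (gpu, sm, mem, enc, dec in percent), and the sample text is illustrative, not captured from your machine.

```python
import subprocess

# Illustrative `nvidia-smi dmon -s u` output (not real captured data):
SAMPLE = """\
# gpu    sm   mem   enc   dec
# Idx     %     %     %     %
    0     3     1     0   100
    1     5     2     0    12
    0     2     1     0   100
    1     4     2     0     9
"""

def parse_dmon(text):
    """Parse `nvidia-smi dmon -s u` text into per-sample utilization dicts."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip header/comment lines
        fields = line.split()
        if len(fields) < 5:
            continue
        gpu, sm, mem, enc, dec = fields[:5]
        samples.append({"gpu": int(gpu), "sm": int(sm),
                        "mem": int(mem), "enc": int(enc), "dec": int(dec)})
    return samples

def stuck_decoders(samples, threshold=100):
    """Return GPU indices whose decoder utilization never drops below threshold."""
    by_gpu = {}
    for s in samples:
        by_gpu.setdefault(s["gpu"], []).append(s["dec"])
    return [g for g, decs in sorted(by_gpu.items()) if min(decs) >= threshold]

if __name__ == "__main__":
    # On a live machine you would collect real samples instead, e.g.:
    # out = subprocess.run(["nvidia-smi", "dmon", "-s", "u", "-c", "10"],
    #                      capture_output=True, text=True).stdout
    print(stuck_decoders(parse_dmon(SAMPLE)))
```

Running this against the embedded sample reports GPU 0 as stuck at 100% decoder utilization while GPU 1 is healthy. It won’t tell you *what* is holding the decoder, but it gives a timestamped record to attach to a bug report instead of screenshots.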