SM usage increases with nvv4l2decoder's drop-frame-interval set

Hi:
I use a T4 to decode 30 RTMP streams, but the CPU usage increased to 100% after about 30 minutes. Then I found that someone had already filed an issue and there is a patch.
Here is the issue link: Jetson Nano shows 100% CPU Usage after 30 minutes with Deepstream-app demo - #18 by vincent.mcgarry

I use the following pipeline to test:

    gst-launch-1.0 -e \
    rtmpsrc location=$location ! \
    typefind ! \
    flvdemux ! \
    h264parse ! \
    nvv4l2decoder drop-frame-interval=24 ! \
    nvvideoconvert ! video/x-raw,format=NV12 ! \
    videoconvert ! video/x-raw,format=BGRx ! \
    perf ! \
    fakesink sync=true

After I applied the patch, the SM usage increased.

The `nvidia-smi dmon` result before adding the patch:

image

The `nvidia-smi dmon` result after adding the patch:

image

The increase in SM usage depends on the resolution of the video: when I decode smaller videos, the SM usage increases less.

Thanks a lot in advance for any idea/advice!

Hi,
Setting fakesink sync=true goes into continuous decoding and may trigger high CPU usage. Can you please try:

    gst-launch-1.0 -e \
    rtmpsrc location=$location ! \
    typefind ! \
    flvdemux ! \
    h264parse ! \
    nvv4l2decoder drop-frame-interval=24 ! \
    nvvideoconvert ! video/x-raw,format=NV12 ! \
    videoconvert ! video/x-raw,format=BGRx ! \
    perf ! \
    fpsdisplaysink text-overlay=0 video-sink=fakesink -v

Thanks to @mchi for providing the suggestion on this.

Hi @yannian89 feel free to create a new topic if you have further queries about DeepStream SDK. Thanks.


Hi @DaneLLL,

We have found the cause with the help of @mchi.

We fork 30 processes, each of which creates its own CUDA context, and together they consume too much GPU memory. When the GPU memory usage is too high, the SM usage increases. So this issue can be closed.
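The setup that triggered this can be sketched as below. This is a simplified illustration, not our actual launcher script: the real `gst-launch-1.0` pipeline is replaced by a placeholder `echo`, and the point is only the process structure, since each forked process initializes CUDA independently and therefore holds its own context (and context memory) on the GPU.

```shell
# Sketch: fork N independent decoder processes, one per RTMP stream.
# In the real setup each iteration would background a gst-launch-1.0
# pipeline, and each such process would create its own CUDA context.
N=30
count=0
for i in $(seq 1 "$N"); do
  echo "launching decoder $i"   # placeholder for: gst-launch-1.0 rtmpsrc ... &
  count=$((count + 1))
done
wait                            # with real background jobs, wait for all of them
echo "all $count decoders finished"
```

Running the decoders inside a single process (or sharing a context) avoids the per-process context overhead, which is why the aggregate GPU memory, and with it the SM usage, grew with 30 separate processes.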

> Setting fakesink sync=true goes into continuous decoding and may trigger high CPU usage

Nian: I don't think so. With the sync property set, the pipeline syncs to the clock; we can see the fps in the perf element's log.

Thanks @DaneLLL and @mchi