Nvstreammux consumes 100% CPU when drop-pipeline-eos=true and an EOS from source was processed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU (NVIDIA GeForce RTX 3060 Laptop GPU)
• DeepStream Version
6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
535.104.05
• Issue Type( questions, new requirements, bugs)
Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Run a pipeline with drop-pipeline-eos=true for nvstreammux:

docker run --name test \
    --gpus all \
    --entrypoint gst-launch-1.0 \
    nvcr.io/nvidia/deepstream:6.4-samples-multiarch \
    nvstreammux name=muxer width=1280 height=720 batch-size=4 drop-pipeline-eos=true ! \
    fakesink sync=false enable-last-sample=false qos=false \
    videotestsrc num-buffers=300 ! \
    'video/x-raw,width=1280,height=720,framerate=30/1' ! \
    identity sync=true ! \
    nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! \
    muxer.sink_0

This pipeline sends 300 frames at a framerate of 30/1.

After processing all frames, nvstreammux logs the message:

nvstreammux: Successfully handled EOS for source_id=0

and then starts consuming 100% CPU:

$ docker stats
CONTAINER ID   NAME      CPU %     MEM USAGE / LIMIT     MEM %     NET I/O       BLOCK I/O    PIDS
67750b1786aa   test      100.76%   122.3MiB / 15.04GiB   0.79%     7.69kB / 0B   0B / 659kB   8
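
To narrow down which thread is spinning, one option (a sketch run from the host, assuming the container above is still running) is to look at per-thread CPU usage of the containerized gst-launch-1.0 process:

# Resolve the host PID of the container's main process (gst-launch-1.0)
PID=$(docker inspect -f '{{.State.Pid}}' test)
# Show per-thread CPU usage; the spinning thread should sit at ~100%
top -H -p "$PID"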

If you set the environment variable GST_DEBUG=9, you can see that nvstreammux constantly acquires and releases a buffer from the pool:

0:00:04.030811619     1 0x561eff7d4580 DEBUG             GST_BUFFER gstbuffer.c:1470:gst_buffer_is_memory_range_writable: idx 0, length -1
0:00:04.030814133     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstobject.c:264:gst_object_unref:<nvstreammuxbufferpool0> 0x561f000d7630 unref 2->1
0:00:04.030816939     1 0x561eff7d4580 LOG               bufferpool gstbufferpool.c:1134:default_acquire_buffer:<nvstreammuxbufferpool0> acquired buffer 0x561f000d77e0
0:00:04.030819353     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstobject.c:237:gst_object_ref:<nvstreammuxbufferpool0> 0x561f000d7630 ref 1->2
0:00:04.030827609     1 0x561eff7d4580 DEBUG             GST_BUFFER gstbuffer.c:2316:gst_buffer_add_meta: alloc metadata 0x7f56f402e310 (NvDsMeta) of size 72
0:00:04.030830845     1 0x561eff7d4580 LOG               GST_BUFFER gstbuffer.c:1837:gst_buffer_map_range: buffer 0x561f000d77e0, idx 0, length -1, flags 0001
0:00:04.030833119     1 0x561eff7d4580 LOG               GST_BUFFER gstbuffer.c:305:_get_merged_memory: buffer 0x561f000d77e0, idx 0, length 1
0:00:04.030835644     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstminiobject.c:478:gst_mini_object_ref: 0x561eff7e6610 ref 1->2
0:00:04.030837958     1 0x561eff7d4580 TRACE            GST_LOCKING gstminiobject.c:233:gst_mini_object_lock: lock 0x561eff7e6610: state 00010000, access_mode 1
0:00:04.030841344     1 0x561eff7d4580 TRACE            GST_LOCKING gstminiobject.c:293:gst_mini_object_unlock: unlock 0x561eff7e6610: state 00010101, access_mode 1
0:00:04.030845973     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstminiobject.c:660:gst_mini_object_unref: 0x561eff7e6610 unref 2->1
0:00:04.030848187     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstminiobject.c:660:gst_mini_object_unref: 0x561f000d77e0 unref 1->0
0:00:04.030850672     1 0x561eff7d4580 TRACE        GST_REFCOUNTING gstminiobject.c:478:gst_mini_object_ref: 0x561f000d77e0 ref 0->1
0:00:04.030852766     1 0x561eff7d4580 LOG               GST_BUFFER gstbuffer.c:765:_gst_buffer_dispose: release 0x561f000d77e0 to pool 0x561f000d7630
0:00:04.030855761     1 0x561eff7d4580 LOG               GST_BUFFER gstbuffer.c:1694:gst_buffer_resize_range: trim 0x561f000d77e0 0-64 size:64 offs:0 max:64
0:00:04.030857925     1 0x561eff7d4580 DEBUG             GST_BUFFER gstbuffer.c:2514:gst_buffer_foreach_meta: remove metadata 0x7f56f402e310 (NvDsMeta)
0:00:04.030863385     1 0x561eff7d4580 LOG               bufferpool gstbufferpool.c:1307:default_release_buffer:<nvstreammuxbufferpool0> released buffer 0x561f000d77e0 0
0:00:04.030865940     1 0x561eff7d4580 DEBUG             GST_BUFFER gstbuffer.c:1470:gst_buffer_is_memory_range_writable: idx 0, length -1
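
As a side note, GST_DEBUG=9 is extremely verbose; limiting the log to the categories seen above should be enough to capture just this acquire/release loop. A sketch of the same reproduction with a category-filtered debug spec passed via -e:

docker run --rm \
    --gpus all \
    -e GST_DEBUG=GST_BUFFER:7,bufferpool:7,GST_REFCOUNTING:7,GST_LOCKING:7 \
    --entrypoint gst-launch-1.0 \
    nvcr.io/nvidia/deepstream:6.4-samples-multiarch \
    nvstreammux name=muxer width=1280 height=720 batch-size=4 drop-pipeline-eos=true ! \
    fakesink sync=false enable-last-sample=false qos=false \
    videotestsrc num-buffers=300 ! \
    'video/x-raw,width=1280,height=720,framerate=30/1' ! \
    identity sync=true ! \
    nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! \
    muxer.sink_0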

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Thanks for sharing! I can reproduce this issue on dGPU with DS 6.4. We are investigating!

Can you use the new nvstreammux instead? The following command line does not have this high CPU usage issue. BTW, the new nvstreammux is open source.

export USE_NEW_NVSTREAMMUX=yes && gst-launch-1.0 -v \
    videotestsrc num-buffers=30 ! \
    'video/x-raw,width=1280,height=720,framerate=30/1' ! \
    identity sync=true ! \
    nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! \
    muxer.sink_0 \
    nvstreammux name=muxer batch-size=4 drop-pipeline-eos=true ! \
    fakesink sync=false enable-last-sample=false qos=false
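
For an apples-to-apples comparison with the original report, here is a sketch of the same new-nvstreammux pipeline run inside the same 6.4-samples-multiarch container, with the environment variable passed via -e and the original 300 buffers (the new mux has no width/height properties, so those are dropped):

docker run --rm \
    --gpus all \
    -e USE_NEW_NVSTREAMMUX=yes \
    --entrypoint gst-launch-1.0 \
    nvcr.io/nvidia/deepstream:6.4-samples-multiarch \
    nvstreammux name=muxer batch-size=4 drop-pipeline-eos=true ! \
    fakesink sync=false enable-last-sample=false qos=false \
    videotestsrc num-buffers=300 ! \
    'video/x-raw,width=1280,height=720,framerate=30/1' ! \
    identity sync=true ! \
    nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! \
    muxer.sink_0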

Hello. Unfortunately, the new nvstreammux lacks features that our framework Savant requires, such as image padding and scaling to a single resolution. In the future we will switch to the new implementation, but it is still worth fixing the old one, because plenty of customers still use JP 5.0.2 and JP 5.1.2 GA and have PCB hardware vendor locks preventing them from upgrading to a newer DS. And there is also the Xavier family.

But it is good to know that the problem does not exist in the new nvstreammux.

@kudryavtsev_ia @tomskih_pa please refer to this link for scaling to a single resolution.
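
In case that link goes stale, the usual pattern (a sketch with hypothetical, video-only file URIs, not copied from the linked page) is to scale each source to the common resolution with nvvideoconvert and a caps filter on its own branch, since the new nvstreammux does not scale by itself:

# Hypothetical two-source example: each branch is scaled/converted to
# 1280x720 RGBA before the new nvstreammux, which does not scale itself.
export USE_NEW_NVSTREAMMUX=yes
gst-launch-1.0 \
    nvstreammux name=muxer batch-size=2 ! fakesink sync=false \
    uridecodebin uri=file:///path/to/a.mp4 ! nvvideoconvert ! \
        'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! muxer.sink_0 \
    uridecodebin uri=file:///path/to/b.mp4 ! nvvideoconvert ! \
        'video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720' ! muxer.sink_1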

Thank you for the link; it can help.

However, it does not change the fact that DS 6.3 and earlier versions (where the bug still exists) are out there, and many people still use JP 5.x and are unwilling or unable to upgrade to JP 6.0 because of Xavier's limitations.

I mean, switching to the new nvstreammux is often not an option because of its other limitations.

About the issue “after EOS, nvstreammux consumes 100% CPU when drop-pipeline-eos is true”: we have fixed the problem internally. Please wait for a later release.
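
One practical note for anyone reproducing this in the meantime: since drop-pipeline-eos=true keeps the EOS from propagating downstream, gst-launch-1.0 never exits on its own, so the reproduction container has to be stopped manually once the spin starts, for example:

# Stop and remove the spinning reproduction container
docker stop -t 2 test && docker rm test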

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.