Gpu-id property is set based on SRC caps

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 6.3

• TensorRT Version: 8.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.104.05

I have a pipeline:

udpsrc ! application/x-rtp,encoding-name=H26X ! rtph26xdepay ! h26xparse ! queue ! nvv4l2decoder ! nvvideoconvert ! videorate ! "video/x-raw(memory:NVMM), framerate=8/1" ! queue ! mux.sink_0 nvstreammux name=mux ! nvinfer ! queue ! tracker ! queue ! nvinfer ! queue ! nvinfer ! queue ! fakesink

When I run with the new streammux (export USE_NEW_NVSTREAMMUX=yes), this warning spams the console:

0:03:36.191837381 4429 0x7e968c118d20 WARN nvvideoconvert gstnvvideoconvert.c:1962:gst_nvvideoconvert_fixate_caps:<decoder_nvvidconv_17f493db-4d17-4c9f-b474-0a5af87d2376> gpu-id property is set based on SRC caps. Property config setting (if any) is overridden!!

If I run the same pipeline but with export USE_NEW_NVSTREAMMUX=no, it runs without the warning.

I need to use the new streammux. How can I prevent these warnings from happening?

Have you configured different gpu-ids on the elements? Otherwise, try the following property:

nvvideoconvert nvbuf-memory-type=2

I’ve used nvvideoconvert nvbuf-memory-type=2 and the warning is still there.

We have production devices with one or more GPUs. Even though the gpu-id is assigned programmatically to ALL elements in the software, this happens even if the device has a single GPU.
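
For reference, the assignment is roughly the following (a simplified sketch, not our actual code; the element names are just examples):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
GPU_ID = 0  # chosen per device at startup

def set_gpu_id(element, gpu_id=GPU_ID):
    # Only touch elements that actually expose a gpu-id property
    if element.find_property("gpu-id") is not None:
        element.set_property("gpu-id", gpu_id)

decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
converter = Gst.ElementFactory.make("nvvideoconvert", "converter")

for element in (decoder, converter):
    set_gpu_id(element)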

My question is: what do I need to set (filter) in the caps?
Why is there no sample app implementation that uses the new nvstreammux?

Is the new-nvstreammux part of Stable plugins?

Yes, it is stable.

For the new streammux, gpu-id/nvbuf-memory-type are unnecessary; you can refer to this:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreammux2.html#id3

If you want to disable the above printing, modify your pipeline as shown below.

Generally speaking, nvvideoconvert is also unnecessary with the new nvstreammux.

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_0 ! queue ! mux.sink_0 \
nvstreammux name=mux batch-size=1 ! \
nvstreamdemux name=demux \
demux.src_0 ! fakesink sync=true
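
If you run it from Python instead of gst-launch, a minimal sketch would look like the one below; setting USE_NEW_NVSTREAMMUX from os.environ before the pipeline is created should be equivalent to exporting it in the shell, but exporting it is the documented way.

import os
os.environ["USE_NEW_NVSTREAMMUX"] = "yes"  # must be set before the nvstreammux plugin is loaded

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 "
    "name=source_0 ! queue ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 ! "
    "nvstreamdemux name=demux "
    "demux.src_0 ! fakesink sync=true"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()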

In fact, deepstream-app, deepstream_infer_tensor_meta_test, etc. all support the new streammux.

I see. But there is more to it. It appears as if the buffers are “auto” allocating memory and causing CUDA errors, because when USE_NEW_NVSTREAMMUX is used the pipeline also prints:

gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size
gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size
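
(The pool size that warning refers to can presumably be raised on the tracker element, as in the sketch below with an arbitrary value, but I am not sure this is the actual cause.)

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
# Property name taken from the warning text; 64 is just an arbitrary larger value
if tracker.find_property("user-meta-pool-size") is not None:
    tracker.set_property("user-meta-pool-size", 64)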

Also, it sometimes errors at the CUDA level:

0:01:44.554063302    18 0x7f02d4004180 ERROR  nvinfer gstnvinfer.cpp:1209:get_converted_buffer:<detector> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:44.554179577    18 0x7f02d4004180 WARN                 nvinfer gstnvinfer.cpp:1560:gst_nvinfer_process_full_frame:<detector> error: Buffer conversion failed
ERROR: nvdsinfer_context_impl.cpp:1848 Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: nvdsinfer_context_impl.cpp:1681 Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:01:44.555175346    18 0x7f02d40040c0 WARN                 nvinfer gstnvinfer.cpp:2435:gst_nvinfer_output_loop:<detector> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:01:44.556277399    18      0x3d3b300 WARN                 nvinfer gstnvinfer.cpp:1404:gst_nvinfer_input_queue_loop:<reid> error: Failed to queue input batch for inferencing

0:01:44.565906953    18 0x7f02d40040c0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1884> [UID = 1]: Tried to release an outputBatchID which is already with the context

0:01:44.568965716    18 0x7efebc02c5e0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:44.569045160    18 0x7efebc02c5e0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:44.569058111    18 0x7efebc02c5e0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink> start failed

0:01:44.632091546    18 0x7efeb43bc1e0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:44.632835176    18 0x7efeb43bc1e0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:44.632850379    18 0x7efeb43bc1e0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink> start failed

0:01:45.551866572    18 0x7efeec0604c0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:45.551898260    18 0x7efeec0604c0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:45.551915921    18 0x7efeec0604c0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink> start failed

0:01:46.235460654    18 0x7eff14660300 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.235514308    18 0x7eff14660300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.235530324    18 0x7eff14660300 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink> start failed
0:01:46.248502406    18 0x7efef8149b60 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.248554437    18 0x7efef8149b60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.248565761    18 0x7efef8149b60 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink> start failed
0:01:46.253266687    18 0x7efef8149aa0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.254441193    18 0x7efef8149aa0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.254496383    18 0x7efef8149aa0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink> start failed

2024-02-21 03:18:02.208 ERROR    <_bus_call@inference_pipeline.py> [1] gst-resource-error-quark: Failed to allocate required memory. (13): gstv4l2videodec.c(2252): gst_v4l2_video_dec_handle_frame (): /GstPipeline:inference_pipeline_1/GstBin:decode_bin_f572db21-081b-4935-97b3-687120d6c6af/nvv4l2decoder:decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:

Again, all of this won’t happen using the legacy streammux.

I was relating the issues to switching the GPU allocation, but it is probably something else since, as you say, there is no need for conversion.

Lastly, the converter that we use in our production pipeline also rotates the stream (if needed), so I do need a converter somewhere in the decoding pipeline.

It is important to note that our production code runs in Python.
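
The rotation is done with the converter roughly like this (a sketch; flip-method is the property I assume handles rotation, check gst-inspect-1.0 nvvideoconvert for the exact enum values on your version):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
rotate = Gst.ElementFactory.make("nvvideoconvert", "rotate")
# flip-method selects the rotation/flip applied by the converter;
# 1 is used here only as an illustrative value
if rotate.find_property("flip-method") is not None:
    rotate.set_property("flip-method", 1)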

Can you share sample code to reproduce the problem? I want to confirm whether this is a bug.

From your error log, it seems that there is insufficient video memory.
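
To confirm that, you can watch GPU memory while the pipeline runs, either with nvidia-smi or from Python. A sketch, assuming the optional nvidia-ml-py (pynvml) package is installed:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"GPU memory used: {info.used / 1024**2:.0f} MiB of {info.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()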