Gpu-id property is set based on SRC caps

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 6.3
• TensorRT Version: 8.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.104.05

I have a pipeline:

udpsrc ! application/x-rtp,encoding-name=H26X ! rtph26xdepay ! h26xparse ! queue ! nvv4l2decoder ! nvvideoconvert ! videorate ! video/x-raw(memory:NVMM),framerate=8/1 ! queue ! mux.sink_0 nvstreammux name=mux ! nvinfer ! queue ! tracker ! queue ! nvinfer ! queue ! nvinfer ! queue ! fakesink

When I run with the new streammux (export USE_NEW_NVSTREAMMUX=yes), this warning spams the console:

0:03:36.191837381 4429 0x7e968c118d20 WARN nvvideoconvert gstnvvideoconvert.c:1962:gst_nvvideoconvert_fixate_caps:<decoder_nvvidconv_17f493db-4d17-4c9f-b474-0a5af87d2376> gpu-id property is set based on SRC caps. Property config setting (if any) is overridden!!

If I run the same pipeline but with export USE_NEW_NVSTREAMMUX=no it runs without warning.

I need to use the new streammux. How can I prevent these warnings?

Have you configured different gpu-ids on the elements? Or try the following property:

nvvideoconvert nvbuf-memory-type=2

I’ve used nvvideoconvert nvbuf-memory-type=2 and the warning is still there.

We have production devices with one or more GPUs. Even though the gpu-id is assigned programmatically to ALL elements in the software, this happens even on devices with a single GPU.
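For reference, the programmatic assignment is roughly like this (a simplified sketch, not our actual code; the nvbuf-memory-type value just follows the suggestion above):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def set_gpu_id(pipeline: Gst.Pipeline, gpu_id: int) -> None:
    """Set gpu-id (and nvbuf-memory-type) on every element that exposes them."""
    for element in pipeline.iterate_recurse():
        if element.find_property("gpu-id") is not None:
            element.set_property("gpu-id", gpu_id)
        if element.find_property("nvbuf-memory-type") is not None:
            element.set_property("nvbuf-memory-type", 2)  # 2, as suggested above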

My question is: what do I need to set (filter) in the caps?
Why is there no implementation using the new nvstreammux in the sample apps?

Is the new-nvstreammux part of Stable plugins?

Yes, it is stable.

For the new streammux, gpu-id/nvbuf-memory-type are unnecessary; you can refer to this:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreammux2.html#id3

If you want to suppress the above printout, modify your pipeline as below.

Generally speaking, nvvideoconvert is also unnecessary with the new nvstreammux:

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_0 ! queue ! mux.sink_0 \
nvstreammux name=mux batch-size=1 ! \
nvstreamdemux name=demux \
demux.src_0 ! fakesink sync=true

In fact, deepstream-app, deepstream_infer_tensor_meta_test, etc., all support the new streammux.

I see. But there is more to it. It appears as if the buffers are “auto”-allocating memory and causing CUDA errors, since the behaviour when using USE_NEW_NVSTREAMMUX=yes also prints:

gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size
gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size

Also sometimes it errors on CUDA level:

0:01:44.554063302    18 0x7f02d4004180 ERROR  nvinfer gstnvinfer.cpp:1209:get_converted_buffer:<detector> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:44.554179577    18 0x7f02d4004180 WARN                 nvinfer gstnvinfer.cpp:1560:gst_nvinfer_process_full_frame:<detector> error: Buffer conversion failed
ERROR: nvdsinfer_context_impl.cpp:1848 Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: nvdsinfer_context_impl.cpp:1681 Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:01:44.555175346    18 0x7f02d40040c0 WARN                 nvinfer gstnvinfer.cpp:2435:gst_nvinfer_output_loop:<detector> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:01:44.556277399    18      0x3d3b300 WARN                 nvinfer gstnvinfer.cpp:1404:gst_nvinfer_input_queue_loop:<reid> error: Failed to queue input batch for inferencing

0:01:44.565906953    18 0x7f02d40040c0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<detector> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1884> [UID = 1]: Tried to release an outputBatchID which is already with the context

0:01:44.568965716    18 0x7efebc02c5e0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:44.569045160    18 0x7efebc02c5e0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:44.569058111    18 0x7efebc02c5e0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:pool:sink> start failed

0:01:44.632091546    18 0x7efeb43bc1e0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:44.632835176    18 0x7efeb43bc1e0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:44.632850379    18 0x7efeb43bc1e0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_6d909893-5064-41a3-8a10-0a8c7b840d5b:pool:sink> start failed

0:01:45.551866572    18 0x7efeec0604c0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:45.551898260    18 0x7efeec0604c0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:45.551915921    18 0x7efeec0604c0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_9071a25a-f1b0-47cb-b597-2dee558f8c5d:pool:sink> start failed

0:01:46.235460654    18 0x7eff14660300 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.235514308    18 0x7eff14660300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.235530324    18 0x7eff14660300 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_620a95ef-48e2-4286-bbd2-7eb1c350db59:pool:sink> start failed
0:01:46.248502406    18 0x7efef8149b60 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.248554437    18 0x7efef8149b60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.248565761    18 0x7efef8149b60 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_f04e3406-f2f5-47e8-8895-93dbccc92081:pool:sink> start failed
0:01:46.253266687    18 0x7efef8149aa0 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink:allocator> error requesting 2 buffers: Unknown error -1
0:01:46.254441193    18 0x7efef8149aa0 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink> we received 0 buffer from device '/dev/nvidia0', we want at least 2
0:01:46.254496383    18 0x7efef8149aa0 ERROR             bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active:<decoder_decoder_fdf3ba50-98d6-42d9-a59c-666e5c1085a2:pool:sink> start failed

2024-02-21 03:18:02.208 ERROR    <_bus_call@inference_pipeline.py> [1] gst-resource-error-quark: Failed to allocate required memory. (13): gstv4l2videodec.c(2252): gst_v4l2_video_dec_handle_frame (): /GstPipeline:inference_pipeline_1/GstBin:decode_bin_f572db21-081b-4935-97b3-687120d6c6af/nvv4l2decoder:decoder_decoder_f572db21-081b-4935-97b3-687120d6c6af:

Again, none of this happens when using the legacy streammux.

I was relating these issues to switching the GPU allocation, but it is probably something else since, as you say, there is no need for conversion.

Lastly, the converter we use in our production pipeline also rotates the stream (if needed), so I do need a converter somewhere in the decoding pipeline.

It is important to note that our production code runs in Python.
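For illustration, here is a stripped-down sketch of how that decode branch is wired, with the rotation converter in front of the new nvstreammux (not our production code; the port, codec, flip-method value, and element names are placeholders):

import os
os.environ.setdefault("USE_NEW_NVSTREAMMUX", "yes")  # must be set before the plugins load

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Rotation is done by nvvideoconvert (flip-method) before the mux;
# error handling and the main loop are omitted for brevity.
pipeline = Gst.parse_launch(
    "udpsrc port=5000 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! "
    "rtph264depay ! h264parse ! queue ! nvv4l2decoder ! "
    "nvvideoconvert flip-method=2 ! video/x-raw(memory:NVMM) ! queue ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)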

Can you share sample code to reproduce the problem? I want to confirm whether this is a bug.

From your error log, it seems that there is insufficient video memory.

You can try:

#!/bin/bash
clear
export GST_DEBUG_FILE=""
export GST_DEBUG=2
export USE_NEW_NVSTREAMMUX=yes

RTSP_URL=$1
CODEC=$2
ROTATION=$3
OUTPUT_FILE=$4

echo "Saving rtsp: $RTSP_URL with codec $CODEC and rotation $ROTATION to $OUTPUT_FILE"

if [ "$CODEC" == "h264" ]; then
    PAYLOAD="rtph264pay"
    DEPAY="rtph264depay"
    PARSE="h264parse"
    ENCODING_NAME="H264"
elif [ "$CODEC" == "h265" ]; then
    PAYLOAD="rtph265pay"
    DEPAY="rtph265depay"
    PARSE="h265parse"
    ENCODING_NAME="H265"
else
    echo "Invalid codec type. Please specify h264 or h265."
    exit 1
fi


gst-launch-1.0 -e rtspsrc location="$RTSP_URL" ! \
    "application/x-rtp,encoding-name=$ENCODING_NAME" ! \
    baseanalyzer rtp-analysis=true ! \
    $DEPAY ! \
    $PARSE ! \
    cameradetails ! \
    queue ! \
    nvv4l2decoder gpu-id=0 low-latency-mode=true ! \
    videorate max-rate=5 skip-to-first=true ! \
    nvvideoconvert name=conv1 gpu-id=0 flip-method=0 ! \
    "video/x-raw(memory:NVMM)" ! \
    queue ! \
    mux.sink_0 \
    nvstreammux name=mux batch-size=10 ! \
    nvvideoconvert name=conv3 gpu-id=0 ! \
    x264enc ! \
    mp4mux ! \
    filesink location="$OUTPUT_FILE"

It will WARN:

0:00:01.370973270 63357 0x5aae936f8ea0 WARN          nvvideoconvert gstnvvideoconvert.c:1957:gst_nvvideoconvert_fixate_caps:<conv1> nvbuf-memory-type property is set based on SRC caps. Property config setting (if any) is overridden!!
0:00:01.370982720 63357 0x5aae936f8ea0 WARN          nvvideoconvert gstnvvideoconvert.c:1962:gst_nvvideoconvert_fixate_caps:<conv1> gpu-id property is set based on SRC caps. Property config setting (if any) is overridden!!
0:00:01.371155690 63357 0x7b2508028520 WARN          v4l2bufferpool gstv4l2bufferpool.c:1565:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:01.371545270 63357 0x7b2508028520 WARN          nvvideoconvert gstnvvideoconvert.c:1957:gst_nvvideoconvert_fixate_caps:<conv1> nvbuf-memory-type property is set based on SRC caps. Property config setting (if any) is overridden!!
0:00:01.371555180 63357 0x7b2508028520 WARN          nvvideoconvert gstnvvideoconvert.c:1962:gst_nvvideoconvert_fixate_caps:<conv1> gpu-id property is set based on SRC caps. Property config setting (if any) is overridden!!
0:00:01.371758660 63357 0x7b2508028520 WARN          nvvideoconvert gstnvvideoconvert.c:1957:gst_nvvideoconvert_fixate_caps:<conv1> nvbuf-memory-type property is set based on SRC caps. Property config setting (if any) is overridden!!
0:00:01.371767050 63357 0x7b2508028520 WARN          nvvideoconvert gstnvvideoconvert.c:1962:gst_nvvideoconvert_fixate_caps:<conv1> gpu-id property is set based on SRC caps. Property config setting (if any) is overridden!!

In my experience, these warnings lead to a crossed-memory allocation error.

Two important notes:

  1. This was not happening on DS 6.2.
  2. This does not happen with the legacy nvstreammux.

That was an issue in DS-6.2. On a multi-GPU platform, you must check whether the elements are working on the same GPU.

The legacy nvstreammux uses the GPU for scaling, but the new streammux is only responsible for forming batches and does not use the GPU.

I can’t run the script because of the customized elements. In fact, these warnings do not have any impact; they are normal.

So if I understand correctly, on DS 6.2 there is no warning because of an issue, and from DS 6.3 onwards the software allocates the GPU based on caps? If so, how can I make sure it is using the GPU I set programmatically? Is there a way to set this in the caps? Why does running the same software on DS 6.3 end up in a CUDA error due to memory allocation when DS 6.2 does not?

If it helps, removing the custom elements from the pipeline still reproduces the issue (my bad on that one).

DS-6.2 does not emit these warning logs; they were added in DS-6.3.

This is a matter of caps negotiation between GStreamer elements: nvvideoconvert must work on the same GPU as nvstreammux.

In addition, if you need to force a specific GPU, you can use this environment variable, which ensures that all elements work on the same GPU:

export CUDA_VISIBLE_DEVICES="gpu_id"
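For example, in a Python application it could be set before GStreamer/CUDA initialize (an illustrative sketch; the GPU index is a placeholder):

import os

# Pin the whole process to one physical GPU before GStreamer/CUDA initialize;
# the selected device is then enumerated as gpu-id 0 inside the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"   # placeholder: physical GPU index
os.environ["USE_NEW_NVSTREAMMUX"] = "yes"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)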

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
