RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

Hello everyone,

I am encountering the following error in my DeepStream pipeline, raised inside `pgie_src_pad_buffer_probe`:

    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

This is my pipeline:

    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(tracker)
    tracker.link(queue2)
    queue2.link(tiler)
    tiler.link(queue3)
    queue3.link(nvvidconv)
    nvvidconv.link(queue4)
    queue4.link(nvosd)
    nvosd.link(queue5)
    queue5.link(nvvidconv2)
    nvvidconv2.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(container)
    container.link(sink)

I am using a YOLOv9s model integrated into DeepStream as our pgie. I need to extract frames from the NvBufSurface to draw bounding boxes on them.

Here’s what I’ve tried so far:

  1. Changing caps to:
    caps = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")

  2. Testing various encoders such as nvv4l2h264enc, nvv4l2h265enc, pngenc, and jpegenc, but none of them worked.

Our setup
• Hardware Platform Tesla T4
• DeepStream Version 7.0
• TensorRT Version 8.6.1.6+cuda12.0
• NVIDIA GPU Driver Version 560.35.03
• Issue Type bug
• Steps to reproduce the error

  1. Keep all config files and code in one folder
  2. git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
  3. docker pull nvcr.io/nvidia/deepstream:7.0-triton-multiarch
  4. docker run -itd --gpus all -w /workspace --net=host -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $PWD:/workspace --privileged --name deepstream-latest-experiment-new nvcr.io/nvidia/deepstream:7.0-triton-multiarch
  5. sudo docker exec -it deepstream-latest-experiment-new bash
  6. wget https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.1.10/pyds-1.1.10-py3-none-linux_x86_64.whl
  7. pip3 install pyds-1.1.10-py3-none-linux_x86_64.whl cuda-python
  8. Replace all the default test3 files with the attached files (cp -r deepstream.py dstest2_tracker_config.txt config_infer_primary_yoloV8.txt /workspace/deepstream_python_apps/apps/deepstream-test3/)
  9. python3 deepstream.py -i file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h265.mp4 --no-display --pgie nvinfer -c config_infer_primary_yoloV8.txt

files:
deepstream_python_code.txt (19.2 KB)
config_infer_primary_yoloV8.txt (761 Bytes)
dstest2_tracker_config.txt (262 Bytes)

How can I modify my code to extract all frames successfully?
I’ve attached the code for reference.

Thank you in advance for your help!
cc: @kayccc @Fiona.Chen @fanzh @yuweiw

Since you added the probe function to the src_pad of the pgie, you should add the conversion before the pgie:
...streammux->nvvideoconvert->capsfilter->pgie...

Our demo deepstream-imagedata-multistream added the probe function to the src_pad of the tiler, so we add the conversion at the end.
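
In code, the suggested reordering might look like this. This is only a sketch: `nvvidconv` and `capsfilter1` are assumed to be `nvvideoconvert` and `capsfilter` elements already created with `Gst.ElementFactory.make()` and added to the pipeline.

```python
# Sketch: force the stream to RGBA before the pgie so that
# pyds.get_nvds_buf_surface() can read the frames in the probe.
caps_rgba = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
capsfilter1.set_property("caps", caps_rgba)

streammux.link(queue1)
queue1.link(nvvidconv)       # performs the actual color conversion
nvvidconv.link(capsfilter1)  # constrains the output format to RGBA
capsfilter1.link(pgie)
```

Note that a capsfilter alone only constrains the format; it needs the nvvideoconvert upstream to actually perform the conversion.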


@yuweiw Thanks for replying

I have added the conversion before the pgie as you said. Here's my pipeline now:

streammux.link(queue1)
queue1.link(nvvidconv)
nvvidconv.link(capsfilter1)
capsfilter1.link(pgie)
pgie.link(queue2)
queue2.link(tiler)
tiler.link(queue3)
queue3.link(tracker)
tracker.link(queue4)
queue4.link(nvosd)
nvosd.link(queue5)
queue5.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(queue6)
queue6.link(encoder)
encoder.link(container)
container.link(sink)

But at the n_frame line:

    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

I am getting this error:

    Segmentation fault (core dumped)

Here's the caps structure just before the n_frame line:

Caps Structure: video/x-raw, width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)0/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-cuda-device, gpu-id=(int)0;

You can narrow it down by trying the following steps first:

  1. Run our deepstream-imagedata-multistream demo to verify that it can save images in your environment.
  2. Integrate your model into the deepstream-imagedata-multistream demo.
  3. Integrate your model into other demos.

Thanks @yuweiw for your advice

I referred to the deepstream-imagedata-multistream demo and switched to CUDA unified memory, which resolved the segmentation fault.

However, in the output (.mp4) I am not getting any bboxes, just the plain input video with no overlays.
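
For reference, the unified-memory change from that demo (for a dGPU platform like the T4 here) looks roughly like this; the element variable names are illustrative:

```python
# Sketch: use CUDA unified memory so the host-side probe can map
# the NvBufSurface. Set it on every element that allocates NVMM
# buffers upstream of the probe (the streammux and the
# nvvideoconvert instances).
mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
streammux.set_property("nvbuf-memory-type", mem_type)
nvvidconv.set_property("nvbuf-memory-type", mem_type)
nvvidconv2.set_property("nvbuf-memory-type", mem_type)
```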

You can refer to our FAQ to learn how to save the video. If you want to save it in MP4 format, you can refer to the following pipeline.

...enc->h264parse->qtmux->filesink
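
A sketch of that MP4 tail in Python, assuming the elements have not been created yet (the variable names and output path are illustrative):

```python
# Sketch: encode to H.264, parse, mux into an MP4 container, and
# write to disk. qtmux produces .mov/.mp4-style output; mp4mux is
# an alternative container muxer.
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
parser = Gst.ElementFactory.make("h264parse", "parser")
muxer = Gst.ElementFactory.make("qtmux", "muxer")
sink = Gst.ElementFactory.make("filesink", "sink")
sink.set_property("location", "out.mp4")

for element in (encoder, parser, muxer, sink):
    pipeline.add(element)
encoder.link(parser)
parser.link(muxer)
muxer.link(sink)
```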

Thanks @yuweiw for your helpful advice. I reviewed the FAQ on saving MP4 videos, but it doesn't seem to address the current issue.

Previously, we were able to save MP4 videos with bounding boxes without any problems.
However, after modifying the pipeline from:
streammux-->pgie-->tracker-->tiler-->nvvidconv-->nvosd
to:
streammux-->nvvidconv-->capsfilter1-->pgie-->tiler-->tracker-->nvosd

We noticed that the bounding boxes no longer appear in the output. This behavior might be related to the pipeline changes.
Any insights or recommendations would be greatly appreciated.

You can try to switch the positions of tracker and tiler in your pipeline.


Thank you so much @yuweiw
I am now getting all the bboxes in output and frames separately using n_frame.

Summary:

1st Error: RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

Solution:

  1. Keep nvvideoconvert and a capsfilter before the pgie,
    like this: ...streammux->nvvideoconvert->capsfilter->pgie...

2nd Error:
Segmentation fault (core dumped) at n_frame line:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

Solution:
Use CUDA unified memory
You can refer to this code at line 390 of deepstream_imagedata-multistream.py.
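
A minimal probe body in the style of that demo, assuming the RGBA conversion and CUDA unified memory fixes above are in place (the function name `save_frame` is illustrative):

```python
# Sketch of frame extraction inside the pad probe, following
# deepstream_imagedata-multistream.py.
import numpy as np
import cv2
import pyds

def save_frame(gst_buffer, frame_meta):
    # Maps the NvBufSurface for this batch slot as a numpy array.
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    # The mapped array aliases device memory; copy it before modifying.
    frame_copy = np.array(n_frame, copy=True, order='C')
    frame_bgr = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
    cv2.imwrite(f"frame_{frame_meta.frame_num}.jpg", frame_bgr)
```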

3rd Error: The bounding boxes no longer appear in the output

Solution:
Switch the positions of the tracker and tiler in the pipeline.
from: streammux-->nvvidconv-->capsfilter1-->pgie-->tiler-->tracker-->nvosd

to: streammux-->nvvidconv-->capsfilter1-->pgie-->tracker-->tiler-->nvosd
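
As a small convenience (not part of pyds; a hypothetical helper), the long chain of .link() calls can be wrapped so that a refused link fails loudly instead of silently producing a broken pipeline. The StubElement class below only exists to make the sketch self-contained; with real GStreamer you would pass the actual elements.

```python
# Hypothetical helper: link a chain of GStreamer-style elements in
# order, raising on the first link that is refused.
# Gst.Element.link() returns False on failure, which is easy to
# ignore by accident when linking elements one pair at a time.
def link_many(*elements):
    for up, down in zip(elements, elements[1:]):
        if not up.link(down):
            raise RuntimeError(f"failed to link {up.name} -> {down.name}")

# Minimal stand-in for Gst.Element so the sketch runs anywhere.
class StubElement:
    def __init__(self, name):
        self.name = name
        self.links = []  # records which elements this one linked to

    def link(self, other):
        self.links.append(other.name)
        return True

chain = [StubElement(n) for n in
         ("streammux", "nvvidconv", "capsfilter1", "pgie",
          "tracker", "tiler", "nvosd")]
link_many(*chain)
```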


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.