DeepStream save detected image to disk

• Hardware Platform: Jetson AGX Orin
convert.deepstream_test_3.py (21.6 KB)

• DeepStream 6.2
• JetPack Version: not sure
• Tegra: 35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
• TensorRT Version: 8.5.2-1+cuda11.4

My video stream is in the YUV color space. Inference works, but since I need to save the detected image to disk, I would like to know how to convert the color space from NV12 (which I assume is the input to the tiler) to RGBA so that pyds.get_nvds_buf_surface accepts the buffer.
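
To make the goal concrete, here is a rough sketch of the probe body I have in mind for writing detected frames to disk. It follows the pattern from the imagedata sample; the function name save_frame_probe and the filename pattern are my own placeholders, and it assumes the buffer is already RGBA by the time the probe runs:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import cv2
import pyds

def save_frame_probe(pad, info, u_data):
    # Sketch of a buffer probe that writes each frame in the batch to disk.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # This is the call that raises the RuntimeError while the buffer is
        # still NV12; it requires the surface to be RGBA.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Copy into a regular numpy array and convert RGBA -> BGR for OpenCV.
        frame_image = cv2.cvtColor(np.array(n_frame, copy=True, order='C'),
                                   cv2.COLOR_RGBA2BGR)
        cv2.imwrite("frame_%d_src_%d.jpg"
                    % (frame_meta.frame_num, frame_meta.source_id), frame_image)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK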

• Issue Type (questions, new requirements, bugs)
I get RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format
when running deepstream-test3.py.

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
By running deepstream-test3.py with YOLOv8 inference (by Marcos Luciano and Ultralytics), using a video stream in the YUV color space.

I did this:

...
    print("Creating Pgie \n ")
    reqPgie = ""
    if requested_pgie is not None and (requested_pgie == 'nvinferserver' or requested_pgie == 'nvinferserver-grpc'):
        reqPgie = "nvinferserver"
        pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    elif requested_pgie is not None and requested_pgie == 'nvinfer':
        reqPgie = "nvinfer"
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    else:
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    print("Requested pgie: " + reqPgie)
    if not pgie:
        sys.stderr.write(" Unable to create pgie :  %s\n" % requested_pgie)

    # Add nvvidconv1 and filter1 to convert the frames to RGBA
    # which is easier to work with in Python.
    print("Creating nvvidconv1 \n ")
    nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
    if not nvvidconv1:
        sys.stderr.write(" Unable to create nvvidconv1 \n")
    print("Creating filter1 \n ")
    caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
    filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
    if not filter1:
        sys.stderr.write(" Unable to get the caps filter1 \n")
    filter1.set_property("caps", caps1)

    if disable_probe:
        # Use nvdslogger for perf measurement instead of probe function
        print ("Creating nvdslogger \n")
        nvdslogger = Gst.ElementFactory.make("nvdslogger", "nvdslogger")

    print("Creating tiler \n ")
    tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")
...

    pipeline.add(pgie)
    if nvdslogger:
        pipeline.add(nvdslogger)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(filter1)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(sink)

...
    print("Linking elements in the Pipeline \n")
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
 
    queue2.link(nvvidconv1)
    nvvidconv1.link(queue3)
    queue3.link(filter1)
    filter1.link(queue4)

#    if nvdslogger:
#        queue4.link(nvdslogger)
#        nvdslogger.link(tiler)
#    else:
    queue4.link(tiler)

    tiler.link(queue5)
    queue5.link(nvvidconv)
    nvvidconv.link(queue6)
    queue6.link(nvosd)
    nvosd.link(queue7)
    queue7.link(sink)

I get this error:

In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffab9ba940 (GstCapsFeatures at 0xfffed8014a40)>
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffab9ba160 (GstCapsFeatures at 0xfffed8015380)>
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffffab9baa60 (GstCapsFeatures at 0xfffed0015180)>

**PERF: {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0}

Traceback (most recent call last):
  File "convert.deepstream_test_3.py", line 182, in pgie_src_pad_buffer_probe
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format
Traceback (most recent call last):
  File "convert.deepstream_test_3.py", line 182, in pgie_src_pad_buffer_probe
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

If it is the case that nvinfer only supports BGR and RGB, why does get_nvds_buf_surface only support RGBA, and how can I convert the buffer?

I found this, but how can I get both the metadata and the image if I connect the buffer probe to the nvmultistreamtiler instead of the pgie?
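
If I understand that correctly, the batch metadata stays attached to the Gst buffer downstream of the pgie, so a probe on the tiler's sink pad should still see both the object meta and, after my nvvidconv1/filter1 pair, an RGBA surface. Something like this, reusing the existing probe function from my script (untested sketch):

    tiler_sink_pad = tiler.get_static_pad("sink")
    if not tiler_sink_pad:
        sys.stderr.write(" Unable to get sink pad of tiler \n")
    else:
        # The batch meta added by streammux/nvinfer travels with the buffer,
        # and the buffer is RGBA here because filter1 sits upstream of the tiler.
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER,
                                 pgie_src_pad_buffer_probe, 0)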

@marcoslucianops can you assist?

I moved the conversion of the image to before the pgie and now the error is gone.
I am checking the output results and will mark this as the solution and provide code if it works…
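
For reference, this is roughly the re-ordered linking I am testing (queue numbering is approximate, and this is not yet verified as the final solution):

    # Convert to RGBA right after the streammux, so every buffer downstream
    # is already in a format that get_nvds_buf_surface accepts.
    streammux.link(queue1)
    queue1.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(queue2)
    queue2.link(pgie)
    pgie.link(queue3)
    queue3.link(tiler)
    tiler.link(queue4)
    queue4.link(nvvidconv)
    nvvidconv.link(queue5)
    queue5.link(nvosd)
    nvosd.link(queue6)
    queue6.link(sink)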

Changing from yolov8s-onnx (the Ultralytics version) to my own trained YOLOv8 model causes
Segmentation fault (core dumped)

I have found something about exporting to ONNX with the "dynamic" property. I don't know what that
means in practice, but it might do the trick… More to come…
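
In case it helps anyone else, this is roughly the export call I am looking at, using the Ultralytics Python API ("best.pt" is just a placeholder for my trained weights, and I have not confirmed yet that dynamic=True is what the DeepStream YOLO parser expects):

from ultralytics import YOLO

# Export my custom-trained YOLOv8 weights to ONNX with dynamic axes.
model = YOLO("best.pt")
model.export(format="onnx", dynamic=True)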

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Glad to hear that.

No. nvinfer accepts batched NV12/RGBA buffers from upstream. get_nvds_buf_surface currently only supports RGBA because the binding has only been implemented for that format.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.