RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA color Format

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) A4000
• DeepStream Version 6.3
• TensorRT Version 8.6
• NVIDIA GPU Driver Version (valid for GPU only) 550
I am trying to run a DeepStream Python pipeline for person detection and face detection, with the detected faces saved to a directory.

streammux.link(pgie)
pgie.link(tracker)
tracker.link(nvvidconv1)
nvvidconv1.link(sgie1)
sgie1.link(nvvidconv)
# nvvidconv.link(filter1)
# filter1.link(nvvidconv1)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

This is the pipeline I am using, but when I uncomment these two links:

nvvidconv.link(filter1)
filter1.link(nvvidconv1)

I get the error above.
multiple_input_1.txt (19.8 KB)

Currently pyds.get_nvds_buf_surface only supports RGBA, so you need to add nvvideoconvert and capsfilter elements upstream of the nvdsosd element.

1. Add the capsfilter:

caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
if not filter1:
    sys.stderr.write(" Unable to get the caps filter1 \n")
filter1.set_property("caps", caps1)

2. Modify the pipeline:

nvvidconv.link(filter1)
filter1.link(nvosd)

When I linked the pipeline this way, obj_meta.parent was always None. When I removed filter1, I could access the parent data, but then I could not save images.

The two are not related.

1. If you need a back-to-back detector, please refer to this example. obj_meta.parent is not None only when the primary detector exists.

2. If you want to save the results, please refer to this sample:

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream-redaction/deepstream_imagedata-multistream_redaction.py

Yes, I am referencing this:

if (obj_meta->unique_component_id == PRIMARY_DETECTOR_UID) {
    if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
        vehicle_count++;
    if (obj_meta->class_id == PGIE_CLASS_ID_PERSON)
        person_count++;
}

if (obj_meta->unique_component_id == SECONDARY_DETECTOR_UID) {
    if (obj_meta->class_id == SGIE_CLASS_ID_FACE) {
        face_count++;
        /* Print this info only when operating in secondary mode. */
        if (SECOND_DETECTOR_IS_SECONDARY)
            g_print("Face found for parent object %p (type=%s)\n",
                    obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]);
    }
}

This C code works. However, when I add the capsfilter to the GStreamer pipeline, the parent values I get are None.

Can you check my DeepStream Python code and pipeline?
multiple_input_1.txt (19.8 KB)

This may be an issue with nvvideoconvert.

After passing through the nvvideoconvert element, objects from the secondary detector lose their parent information.

But if you just want to extract faces, you don’t have to care about this information.

back-to-back.tar.gz (7.0 KB)

First, prepare the model according to the above steps, then put it in /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps.

Run this command line.

 python3 b2b.py your_video.h264

The faces will be extracted into pictures.

Actually, my use case is to extract faces using person object IDs, which is why I need the parent metadata. I extract each face based on its parent object ID.

There is a workaround:

secondary_detector --> osd --> nvvideoconvert --> caps(to RGBA) --> sink
                  |                                               |
                  |                                               |
        add a probe here to record the          save the objects according to the
        object IDs you want to save             object IDs recorded upstream
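
Below is a minimal sketch of the recording probe. It assumes the unique-component-id constant (SECONDARY_DETECTOR_ID) used in the sample further down; the dict name wanted_ids and the probe name are illustrative, not part of any shipped sample, and the samples' StopIteration guards are omitted for brevity.

# Illustrative sketch: runs upstream of nvvideoconvert, where obj_meta.parent
# is still populated, and records which face belongs to which person.
wanted_ids = {}  # face object_id -> parent (person) object_id

def record_ids_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Parent metadata is intact here, before nvvideoconvert.
            if (obj_meta.unique_component_id == SECONDARY_DETECTOR_ID
                    and obj_meta.parent is not None):
                wanted_ids[obj_meta.object_id] = obj_meta.parent.object_id
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

A second probe placed after the caps(to RGBA) element can then call pyds.get_nvds_buf_surface (the buffer is RGBA at that point) and save only the objects whose object_id appears in wanted_ids, naming the files by the recorded parent ID.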

OK, I will try that.

When I add a probe on the secondary detector, I get only the person label name in the object metadata.

There is no problem here.

I modified this sample and was able to get the value of obj_meta.parent.class_id correctly.

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import configparser

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.platform_info import PlatformInfo
from common.bus_call import bus_call

import numpy as np
import pyds
import cv2
from os import path

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

PRIMARY_DETECTOR_ID = 1
SECONDARY_DETECTOR_ID = 2

face_count = 0

def sgie_src_pad_buffer_probe(pad,info,u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                if obj_meta.unique_component_id == SECONDARY_DETECTOR_ID:
                    print(f"parent cls id = {obj_meta.parent.class_id} ===> {obj_meta.parent.class_id == PGIE_CLASS_ID_PERSON}")
            except StopIteration:
                break
            try:
                l_obj=l_obj.next
            except StopIteration:
                break
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        vehicle_count = 0
        person_count = 0
        global face_count
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                if obj_meta.unique_component_id == PRIMARY_DETECTOR_ID:
                    # print("meta from primary detector ")
                    if obj_meta.class_id == PGIE_CLASS_ID_VEHICLE:
                        vehicle_count += 1
                    elif obj_meta.class_id == PGIE_CLASS_ID_PERSON:
                        person_count += 1
                elif obj_meta.unique_component_id == SECONDARY_DETECTOR_ID:
                    # print("meta from secondary detector ")
                    face_count += 1

                    # Save only every 30th detected face to limit disk writes.
                    if face_count % 30 == 0:
                        # Getting Image data using nvbufsurface
                        # the input should be address of buffer and batch_id
                        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
                        n_frame = crop_object(n_frame, obj_meta)
                        # convert python array into numpy array format in the copy mode.
                        frame_copy = np.array(n_frame, copy=True, order='C')
                        # convert the array into cv2 default color format
                        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
                        if platform_info.is_integrated_gpu(): # If Jetson, since the buffer is mapped to CPU for retrieval, it must also be unmapped 
                            pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id) # The unmap call should be made after operations with the original array are complete.
                                                                                                #  The original array cannot be accessed after this call.
                        img_path = f"face_{face_count}.jpg"
                        cv2.imwrite(img_path, frame_copy)
                    #print(f"parent cls id = {obj_meta.parent.class_id} ===> {obj_meta.parent.class_id == PGIE_CLASS_ID_PERSON}")
            except StopIteration:
                break
            try:
                l_obj=l_obj.next
            except StopIteration:
                break
        print(f"Frame Number={frame_number} Number of Objects={num_rects} Vehicle_count={vehicle_count} Person_count={person_count}")
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

def crop_object(image, obj_meta):
    rect_params = obj_meta.rect_params
    top = int(rect_params.top)
    left = int(rect_params.left)
    width = int(rect_params.width)
    height = int(rect_params.height)
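
    # Note (suggestion, not in the original sample): detector boxes can extend
    # past the frame edge; clamping, e.g. top = max(top, 0), avoids empty crops.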

    crop_img = image[top:top+height, left:left+width]

    return crop_img

def main(args):
    # Check input arguments
    if (len(args)<2):
        sys.stderr.write("usage: %s <h264_elementary_stream>\n" % args[0])
        sys.exit(1)

    global platform_info
    platform_info = PlatformInfo()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    primary_detector = Gst.ElementFactory.make("nvinfer", "primary-inference-engine1")
    if not primary_detector:
        sys.stderr.write(" Unable to create primary_detector \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    secondary_detector = Gst.ElementFactory.make("nvinfer", "primary-nvinference-engine2")
    if not secondary_detector:
        sys.stderr.write(" Unable to make secondary_detector \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
    filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
    if not filter1:
        sys.stderr.write(" Unable to get the caps filter1 \n")
    filter1.set_property("caps", caps1)

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if platform_info.is_integrated_gpu():
        print("Creating nv3dsink \n")
        sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
        if not sink:
            sys.stderr.write(" Unable to create nv3dsink \n")
    else:
        if platform_info.is_platform_aarch64():
            print("Creating nv3dsink \n")
            sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
        else:
            print("Creating EGLSink \n")
            sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
        if not sink:
            sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    #Set properties of pgie and sgie
    primary_detector.set_property('config-file-path', "primary_detector_config.txt")
    secondary_detector.set_property('config-file-path', "secondary_detector_config.txt")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)

    if not platform_info.is_integrated_gpu():
        # Use CUDA unified memory in the pipeline so frames
        # can be easily accessed on CPU in Python.
        mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
        streammux.set_property("nvbuf-memory-type", mem_type)
        nvvidconv.set_property("nvbuf-memory-type", mem_type)
        if platform_info.is_wsl():
            #opencv functions like cv2.line and cv2.putText are not able to access NVBUF_MEM_CUDA_UNIFIED memory
            #on WSL systems for some reason and give a SEGFAULT. Use NVBUF_MEM_CUDA_PINNED memory for such
            #usecases on WSL. Here, nvvidconv's buffer is used in the osd sink pad probe and cv2 operations
            #are done on that.
            print("using nvbuf_mem_cuda_pinned memory for nvvidconv1\n")
            vc_mem_type = int(pyds.NVBUF_MEM_CUDA_PINNED)
            nvvidconv.set_property("nvbuf-memory-type", vc_mem_type)
        else:
            nvvidconv.set_property("nvbuf-memory-type", mem_type)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(primary_detector)
    pipeline.add(tracker)
    pipeline.add(secondary_detector)
    pipeline.add(nvvidconv)
    pipeline.add(filter1)
    pipeline.add(nvosd)
    pipeline.add(sink)

    # we link the elements together
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(primary_detector)
    primary_detector.link(tracker)
    tracker.link(secondary_detector)
    secondary_detector.link(nvvidconv)
    nvvidconv.link(filter1)
    filter1.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    sgiesrcpad = secondary_detector.get_static_pad("src")
    if not sgiesrcpad:
        sys.stderr.write(" Unable to get src pad of secondary_detector \n")
    sgiesrcpad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    print("Starting pipeline \n")
    
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
      loop.run()
    except:
      pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))


There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.