How to crop a detected object using DeepStream and deliver it to segmentation

Hi, I’m trying to create a pipeline with this plan:

* Get an mp4 or avi file
* Detect objects in each frame using YOLOv5
* Center-crop (512x512) each object
* Segment the objects using UNet
* Count the objects

I used deepstream_python_apps and now have two different pipelines: an object detector (the test1 example) and segmentation.
I can’t understand how to merge these pipelines into one.

I know that I can get the frame using pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id),
but I can’t understand how to deliver it to the next nvinfer bin.

You can use PGIE + SGIE mode. Please refer to the deepstream-test2 sample: Python Sample Apps and Bindings Source Details — DeepStream 6.1.1 Release documentation
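
In test2 terms, the detector is the PGIE and the second model is an SGIE that runs on the objects the PGIE finds. A minimal sketch of that element chain (element creation and config-file setup omitted, element names assumed):

streammux.link(pgie)   # batched frames -> primary detector (PGIE)
pgie.link(tracker)     # attach track IDs to the detected objects
tracker.link(sgie)     # secondary nvinfer runs on each detected object
sgie.link(nvvidconv)   # then on to the usual convert/display elements
nvvidconv.link(nvosd)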

OK, I’ve written this code:

    # Create gstreamer elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    
    
    uri_name=os.path.join(args[1])
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin=create_source_bin(0, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    padname="sink_%u" %0
    sinkpad= streammux.get_request_pad(padname) 
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad=source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
    # Continue initializing elements
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    nvseginfer = Gst.ElementFactory.make("nvinfer", "segmentation-inference")
    if not nvseginfer:
        sys.stderr.write(" Unable to create nvseginfer \n")

    nvsegvisual = Gst.ElementFactory.make("nvsegvisual", "nvsegvisual")
    if not nvsegvisual:
        sys.stderr.write("Unable to create nvsegvisual\n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])

    # Setting options on elements
    streammux.set_property('width', 512)
    streammux.set_property('height', 512)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "config_yolo.txt")
    nvseginfer.set_property('config-file-path', "config_unet.txt")
    nvsegvisual.set_property('width', 512)
    nvsegvisual.set_property('height', 512)

    config = configparser.ConfigParser()
    config.read('tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    # Adding elements
    print("Adding elements to Pipeline \n")


    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(nvseginfer)
    pipeline.add(nvsegvisual)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # Linking elements
    print("Linking elements in the Pipeline \n")
    

    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(nvseginfer)
    nvseginfer.link(nvsegvisual)
    nvsegvisual.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

But segmentation ran on the original image, not on the cropped object. I didn’t change the tracker’s yaml files.

I’ve tried to make a custom plugin, using gst-dsexample as an example.

I found this code:

static GstFlowReturn
blur_objects (GstDsExample * dsexample, gint idx,
    NvOSD_RectParams * crop_rect_params, cv::Mat in_mat)
{
  cv::Rect crop_rect;

  if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0)) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("%s:crop_rect_params dimensions are zero",__func__), (NULL));
    return GST_FLOW_ERROR;
  }

  /* rectangle for cropped objects */
  crop_rect = cv::Rect (crop_rect_params->left, crop_rect_params->top,
      crop_rect_params->width, crop_rect_params->height);

  /* apply gaussian blur to the detected objects */
  GaussianBlur (in_mat (crop_rect), in_mat (crop_rect), cv::Size (15, 15), 4);

  return GST_FLOW_OK;
}

I can’t understand: if I write this code:

in_mat = in_mat(crop_rect)

why is the output image not the detected object?

How can I change in_mat in this script?
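
As a rough numpy analogy (assuming in_mat wraps the frame buffer the way a numpy view shares memory with its parent array), the difference is in-place writes versus rebinding the name:

import numpy as np

frame = np.zeros((4, 4), dtype=np.uint8)  # stands in for the frame behind in_mat

roi = frame[1:3, 1:3]       # like in_mat(crop_rect): a view on the same memory
roi[:] = 255                # in-place write -> the frame itself changes
print(frame.sum())          # 1020: the blur-style in-place edit is visible

roi = np.full((2, 2), 7)    # rebinding the name -> the frame is untouched
print(frame.sum())          # still 1020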

Well, is this still relevant? I need the same thing, but I need to display only one cropped object, pass it to segmentation, and afterwards display only the cropped segmentation image.

To make your segmentation work in SGIE mode, you need to set “process-mode=2” in the nvinfer config file. It is set via the config file, not in the Python code.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#id2
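
For reference, the relevant keys in the SGIE config (config_unet.txt here) look roughly like this; it is only a sketch, and operate-on-gie-id=1 assumes the PGIE config sets gie-unique-id=1, so match it against config_yolo.txt:

[property]
# 2 = secondary (SGIE) mode: run on detected objects, not on full frames
process-mode=2
# unique id of the detector whose objects this SGIE operates on
# (assumes the PGIE config sets gie-unique-id=1)
operate-on-gie-id=1
# 2 = segmentation network
network-type=2
# skip objects smaller than these sizes (tune for your model input)
input-object-min-width=64
input-object-min-height=64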

Before you start to write your own app, please make sure you understand all the features of the DeepStream plugins.

Thank you very much for the help. After editing the secondary inference config I got an error:

0:00:08.797244336  8927     0x2d4b8b20 WARN                 nvinfer gstnvinfer.cpp:1376:convert_batch_and_push_to_input_thread:<secondary1-nvinference-engine> error: NvBufSurfTransform failed with error -2 while converting buffer
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -2 while converting buffer (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1376): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine
0:00:08.817897120  8927     0x2d4b8b20 WARN                 nvinfer gstnvinfer.cpp:1376:convert_batch_and_push_to_input_thread:<secondary1-nvinference-engine> error: NvBufSurfTransform failed with error -3 while converting buffer

** (python3:8927): WARNING **: 16:20:17.449: Use gst_egl_image_allocator_alloc() to allocate from this allocator
0:00:08.839765448  8927     0x2d4b9e30 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<secondary1-nvinference-engine> error: Internal data stream error.
0:00:08.839808617  8927     0x2d4b9e30 WARN                 nvinfer gstnvinfer.cpp:2288:gst_nvinfer_output_loop:<secondary1-nvinference-engine> error: streaming stopped, reason error (-5)

** (python3:8927): WARNING **: 16:20:17.452: Use gst_egl_image_allocator_alloc() to allocate from this allocator

** (python3:8927): WARNING **: 16:20:17.455: Use gst_egl_image_allocator_alloc() to allocate from this allocator
Segmentation fault (core dumped)

I changed input-object-min-width, but now I get a completely black image with the object’s bbox.

Please refer to deepstream_python_apps/apps/deepstream-test2 at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)

I did the same:

import os

import sys
sys.path.append('../')

import gi
gi.require_version('Gst', '1.0')

from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
from common.FPS import GETFPS
import configparser
import numpy as np
import cv2

import pyds

PGIE_CLASS_ID_BOTTOM = 0
PGIE_CLASS_ID_HOOK = 1
PGIE_CLASS_ID_PRESS_B = 2

COLORS = [[255, 255, 255], [0, 0, 128], [0, 128, 128], [128, 0, 0],
          [128, 0, 128], [128, 128, 0], [0, 128, 0], [0, 0, 64],
          [0, 0, 192], [0, 128, 64], [0, 128, 192], [128, 0, 64],
          [128, 0, 192], [0, 0, 0]]


def map_mask_as_display_bgr(mask):
    """ Map the class mask to a displayable image. Despite the name, this
        returns a single-channel image: background (class 0) pixels are set
        to 255 and all other classes stay 0.
    """
    shp = mask.shape
    bgr = np.zeros((shp[0], shp[1]))
    bgr[mask == 0] = 255
    return bgr

fps_stream=0

def cb_newpad(decodebin, decoder_src_pad,data):
    print("In cb_newpad\n")
    caps=decoder_src_pad.get_current_caps()
    gststruct=caps.get_structure(0)
    gstname=gststruct.get_name()
    source_bin=data
    features=caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=",gstname)
    if(gstname.find("video")!=-1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        print("features=",features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad=source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy,Object,name,user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added",decodebin_child_added,user_data)

def create_source_bin(index,uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name="source-bin-%02d" %index
    print(bin_name)
    nbin=Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri",uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin,uri_decode_bin)
    bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def find_number_of_clusters(mask):
    thresh_count = 0
    # Split the 512-wide mask into 4 vertical strips and check each one
    for half in range(4):
        half_mask = mask[:, half * 128:(half + 1) * 128]

        # Threshold to binary
        thresh = cv2.threshold(half_mask, 100, 255, cv2.THRESH_BINARY)[1]
        erosion_kernel = np.ones((3, 3), np.uint8)
        erosion = cv2.erode(thresh, erosion_kernel, iterations=1)

        contours = cv2.findContours(erosion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = contours[0] if len(contours) == 2 else contours[1]

        isolated_count = 0
        cluster_count = 0
        for cntr in contours:
            area = cv2.contourArea(cntr)
            convex_hull = cv2.convexHull(cntr)
            convex_hull_area = cv2.contourArea(convex_hull)

            if convex_hull_area == 0:
                convex_hull_area = 1
            if area == 0:
                area = 1

            # Compare the contour area to the area of its convex hull
            ratio = area / convex_hull_area
            # If the contour is clearly smaller than its hull, count it as a cluster
            if ratio < 0.90:
                cluster_count = cluster_count + 1
            # If the contour nearly fills its hull, it is mostly noise
            else:
                isolated_count = isolated_count + 1

        # If this strip contains more than one cluster, increase thresh_count,
        # otherwise decrease it
        if cluster_count > 1:
            thresh_count += 1
        else:
            thresh_count -= 1

    return 1 if thresh_count <= 0 else 2

def osd_sink_pad_buffer_probe(pad,info,u_data):
    # print("Pad: " ,dir(pad))
    # print("Info: ", dir(info))
    obj_counter = {
        PGIE_CLASS_ID_BOTTOM:0,
        PGIE_CLASS_ID_HOOK:0,
        PGIE_CLASS_ID_PRESS_B:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    #print("Buffer: ", dir(gst_buffer))
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # print("Base_meta: ", dir(frame_meta.base_meta))
            # print("Buf_pts: ", frame_meta.buf_pts)
        except StopIteration:
            break

        num_rects = frame_meta.num_obj_meta
        l_user = frame_meta.frame_user_meta_list
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
                #print(dir(batch_meta))
                frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            except StopIteration:
                break
        while l_user is not None:
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting is done by pyds.NvDsUserMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                seg_user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if seg_user_meta and seg_user_meta.base_meta.meta_type == \
                    pyds.NVDSINFER_SEGMENTATION_META:
                try:
                    # Note that seg_user_meta.user_meta_data needs a cast to
                    # pyds.NvDsInferSegmentationMeta
                    # The casting is done by pyds.NvDsInferSegmentationMeta.cast()
                    # The casting also keeps ownership of the underlying memory
                    # in the C code, so the Python garbage collector will leave
                    # it alone.
                    segmeta = pyds.NvDsInferSegmentationMeta.cast(seg_user_meta.user_meta_data)
                except StopIteration:
                    break
                # Retrieve mask data in the numpy format from segmeta
                # Note that pyds.get_segmentation_masks() expects object of
                # type NvDsInferSegmentationMeta
                masks = pyds.get_segmentation_masks(segmeta)
                masks = np.array(masks, copy=True, order='C')
                # map the obtained masks to colors of 2 classes.
                frame_image = map_mask_as_display_bgr(masks).astype(np.uint8)
                # gray_image = cv2.cvtColor(frame_image, cv2.COLOR_BGR2GRAY).astype(np.uint8)
                number_of_clusters = find_number_of_clusters(frame_image)
                print("Hello")
                cv2.imwrite("mask.jpg", frame_image)
            try:
                l_user = l_user.next
            except StopIteration:
                break

        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]

        fps = fps_stream.get_fps()

        py_nvosd_text_params.display_text = "FPS={} Number of Objects={} Bottom_count={} Hook_count={} Press={}".format(fps, num_rects, obj_counter[PGIE_CLASS_ID_BOTTOM], obj_counter[PGIE_CLASS_ID_HOOK], obj_counter[PGIE_CLASS_ID_PRESS_B])

        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10

        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        py_nvosd_text_params.set_bg_clr = 1

        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK


def main(args):
    global fps_stream
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)
    is_live = False
    fps_stream=GETFPS(0)
    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    
    
    uri_name=os.path.join(args[1])
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin=create_source_bin(0, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    padname="sink_%u" %0
    sinkpad= streammux.get_request_pad(padname) 
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad=source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
    # Continue initializing elements
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")

    nvsegvisual = Gst.ElementFactory.make("nvsegvisual", "nvsegvisual")
    if not nvsegvisual:
        sys.stderr.write("Unable to create nvsegvisual\n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])

    # Setting options on elements
    streammux.set_property('width', 1500)
    streammux.set_property('height', 1500)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "config_yolo.txt")
    sgie1.set_property('config-file-path', 'config_unet.txt')
    nvsegvisual.set_property('width', 1500)
    nvsegvisual.set_property('height', 1500)
    # Adding elements
    print("Adding elements to Pipeline \n")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)



    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    pipeline.add(nvsegvisual)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # Linking elements
    print("Linking elements in the Pipeline \n")
    

    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(nvsegvisual)
    nvsegvisual.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• All config files used in your app

config_unet.txt (3.7 KB)
config_yolo.txt (3.3 KB)
tracker_config.txt (226 Bytes)
tracker_config.yaml (6.6 KB)
config_tracker_NvDCF_perf.yml (4.5 KB)

Sorry for the late answer.

Hi, could you answer, please? I sent you my config files.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

• Jetson AGX Xavier
• Deepstream 6.0
• JetPack 4.6
• TensorRT 8.0.1
• NVIDIA GPU Driver 32.6.1

Well, as I understand it, the tracker crops the detected object and then resizes it? I need to find the center of the bbox, crop a square with side equal to the longest side of the detection, then resize to 512x512. Where can I find the function that performs the tracker’s crop, so I can rewrite it?
Or, as I originally wanted, how can I write a plugin that crops the object myself? I have been researching this every day and found out that in GStreamer plugins the image lives in the GstBuffer. Is that right?

I found out how to crop it. The solution was simple. In the blur_objects function, every change to in_mat is displayed on screen. So I cropped my object into another variable, resized it to in_mat’s size, and overlaid the resized image on top of in_mat. And it was displayed.
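
A rough Python/OpenCV sketch of that crop-resize-overlay idea (the actual blur_objects code is C++ inside gst-dsexample; this just illustrates the same steps, with the bbox arguments standing in for crop_rect_params):

import cv2

def crop_and_overlay(in_mat, left, top, width, height):
    """Center-crop a square (side = longest bbox side) around the bbox,
    resize it to the full frame, and overlay it on top of in_mat.
    Hypothetical helper mirroring the blur_objects edit, not a DeepStream API."""
    h, w = in_mat.shape[:2]
    side = min(max(width, height), w, h)        # clamp so the square fits
    cx, cy = left + width // 2, top + height // 2
    x0 = min(max(0, cx - side // 2), w - side)  # keep the square inside the frame
    y0 = min(max(0, cy - side // 2), h - side)
    crop = in_mat[y0:y0 + side, x0:x0 + side].copy()  # crop into another variable
    resized = cv2.resize(crop, (w, h))
    in_mat[:, :] = resized  # in-place write, so the change shows up on screen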

There is a sample of peopleSegNet (UNet) model configuration: deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)

If you only want to display one cropped object, you need to write a new crop plugin and use it after the PGIE.
