[DeepStream 6 with Python Bindings] How to apply tracking to custom nvinferserver results in the deepstream-ssd-parser sample?

• Hardware Platform (Jetson / GPU) GTX2080 Ti
• DeepStream Version 6
• TensorRT Version 8.0.1.6+cuda11.3.1.005
• NVIDIA GPU Driver Version (valid for GPU only) 470.103.01
• Issue Type( questions, new requirements, bugs) question

Hi, I’m using the Python API with the deepstream-ssd-parser sample. I want to add tracking to the custom nvinferserver output by using the nvtracker plugin.

Here are some more details about my problem:

  1. The pipeline is .. -> nvinferserver(pgie) -> nvtracker(tracker) -> nvvideoconvert -> nvosd(osd) -> ...
  2. I copied the nvtracker configs from the deepstream_test_2 sample (i.e. config_tracker_NvDCF_perf.yml).
  3. In pgie’s pgie_src_pad_buffer_probe function, I get my custom model outputs correctly.
  4. In pgie’s add_obj_meta_to_frame function:
    • I fill object_id, unique_component_id, class_id, confidence and detector_bbox_info of obj_meta correctly.
    • I keep the position values of obj_meta.rect_params the same as obj_meta.detector_bbox_info.
    • I leave obj_meta.tracker_bbox_info and obj_meta.tracker_confidence empty.
  5. I have hooked the sink probe of osd and the src probe of the tracker, and observed that no matter what NvDsObjectMeta values pgie passes downstream via add_obj_meta_to_frame, the tracker always outputs meaningless tracker_confidence and tracker_bbox_info values (i.e. 0.0 and [0, 0, 0, 0]). Why?
  6. Is there a standard description of the nvtracker plugin’s input/output metadata protocol for reference?
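For reference, the scaling described in step 4 (from the model’s normalized coordinates to pixel coordinates) can be sketched as a small helper; the function name is mine, and the image dimensions are assumed to match the streammux output resolution:

```python
def to_pixel_bbox(left, top, width, height, img_w, img_h):
    """Scale a normalized [0, 1] bbox to pixel units, as used when
    filling obj_meta.detector_bbox_info and obj_meta.rect_params."""
    return (
        float(img_w * left),
        float(img_h * top),
        float(img_w * width),
        float(img_h * height),
    )
```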

My modified add_obj_meta_to_frame is below. The nvtracker downstream of pgie does not work at all.

def add_obj_meta_to_frame(frame_object, batch_meta, frame_meta, label_names):
    """ Inserts an object into the metadata """
    # this is a good place to insert objects into the metadata.
    # Here's an example of inserting a single object.
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)

    # gie.unique_id = 1
    obj_meta.unique_component_id = 1 # this value has been confirmed with nvinferserver's config

    # Set object info including class, detection confidence, etc.
    obj_meta.class_id = frame_object.classId  # always 0: my model supports only one class (face detection).
    obj_meta.confidence = frame_object.detectionConfidence

    # There is no tracking ID upon detection. The tracker will
    # assign an ID.
    obj_meta.object_id = UNTRACKED_OBJECT_ID # I guess the value should be correct.

    # Set the object classification label.
    obj_meta.obj_label = 'Face' # the custom model supports only one class (face).

    # ADDED by me
    detector_bbox_info = obj_meta.detector_bbox_info
    detector_bbox_info.org_bbox_coords.left = float(IMAGE_WIDTH * frame_object.left)
    detector_bbox_info.org_bbox_coords.top = float(IMAGE_HEIGHT * frame_object.top)
    detector_bbox_info.org_bbox_coords.width = float(IMAGE_WIDTH * frame_object.width)
    detector_bbox_info.org_bbox_coords.height = float(IMAGE_HEIGHT * frame_object.height)

    # Set bbox properties. These are in input resolution.
    rect_params = obj_meta.rect_params
    rect_params.left = float(IMAGE_WIDTH * frame_object.left)
    rect_params.top = float(IMAGE_HEIGHT * frame_object.top)
    rect_params.width = float(IMAGE_WIDTH * frame_object.width)
    rect_params.height = float(IMAGE_HEIGHT * frame_object.height)

    # Red border of width 3
    rect_params.border_width = 3
    rect_params.border_color.set(1, 0, 0, 1)

    # Semi-transparent yellow background
    rect_params.has_bg_color = 1
    rect_params.bg_color.set(1, 1, 0, 0.4)

    # Set display text for the object.
    txt_params = obj_meta.text_params
    if txt_params.display_text:
        pyds.free_buffer(txt_params.display_text)

    txt_params.x_offset = int(rect_params.left)
    txt_params.y_offset = max(0, int(rect_params.top) - 10)
    txt_params.display_text = (
        "Face " + "{:04.3f} {}".format(frame_object.detectionConfidence, obj_meta.object_id)
    )
    # Font, font-color and font-size
    txt_params.font_params.font_name = "Serif"
    txt_params.font_params.font_size = 10
    # set(red, green, blue, alpha); set to White
    txt_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

    # Text background color
    txt_params.set_bg_clr = 1
    # set(red, green, blue, alpha); set to Black
    txt_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)

    # Insert the object into the current frame meta.
    # This object has no parent
    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)

Any suggestion or reference sample would be much appreciated!

Sorry for the late response. Is this still an issue that needs support? Thanks

Hi @neoragex2002 ,
Sorry for delay!

Can you refer to the tracker usage in the samples under /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app ?

I think this should be expected, since NvDCF is a visual tracker (see Gst-nvtracker — DeepStream 6.3 Release documentation), which can invalidate the objects you hooked.

If you only want location-based tracking, you could try the IOU tracker.
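The IOU tracker can be selected through the tracker section of a deepstream-app config. A minimal sketch is below; the paths assume a default DeepStream 6.0 install and may need adjusting for your setup:

```ini
[tracker]
enable=1
tracker-width=640
tracker-height=384
# Unified tracker library shipped with DeepStream 6 (path may differ on your system)
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# IOU tracker low-level config from the deepstream-app sample configs
ll-config-file=config_tracker_IOU.yml
```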

I have solved the issue: I found that NvDsFrameMeta.bInferDone was not set to True after the custom inference post-processing. The add_obj_meta_to_frame function in the deepstream-ssd-parser Python sample is incomplete; I suggest fixing that sample function.
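For anyone hitting the same problem, the fix amounts to setting bInferDone on the frame meta after the custom post-processing, so nvtracker treats the frame's detections as valid. A minimal sketch (the helper name is mine; in practice the assignment goes inside the pgie src pad probe, once per frame):

```python
def mark_frame_inference_done(frame_meta):
    """Set bInferDone so the downstream nvtracker processes this frame's
    detector objects instead of ignoring them."""
    frame_meta.bInferDone = True
    return frame_meta
```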

Thank you for your attention!
