(Gst-nvtracker) Tracking of manually inserted objects

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Titan RTX / OrinAGX
• DeepStream Version

6.2
• JetPack Version (valid for Jetson only)
5.1.2
• TensorRT Version
does not matter
• NVIDIA GPU Driver Version (valid for GPU only)
does not matter
• Issue Type( questions, new requirements, bugs)
Questions
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello,

I am interested in using the DeepStream gst-nvtracker plugin for tracking manually defined (user-input) bounding boxes. For now I want to avoid using the inference plugin gst-nvinfer, as I only need the tracking functionality.

My approach is to insert the bounding box coordinates into the metadata once (like gst-nvinfer does) and to set probationAge=1 to prevent problems caused by LateActivation. This way, the tracker will always operate in ShadowTracking mode. However, I am not sure how to set the maxShadowTrackingAge parameter to infinity, or whether that is even possible. Is there a way to achieve this?
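For reference, these knobs live in the TargetManagement section of the tracker config file. A minimal sketch (the values are illustrative, and whether maxShadowTrackingAge accepts any "infinite" sentinel is exactly the open question here):

```yaml
TargetManagement:
  probationAge: 1            # frames a new target must be matched before it becomes Active
  maxShadowTrackingAge: 60   # frames a target is kept alive without a detector match
  earlyTerminationAge: 1     # a Tentative target unmatched this long is dropped
```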

Also, I would appreciate any additional information, instructions or examples on how to get this running.

Thank you for your time and assistance.

Can you share more details on why you need to use nvtracker like this? Why not use nvinfer in your project? It should work fine if you set probationAge=1. You can set maxShadowTrackingAge to control the shadow tracking length. Regarding shadow tracking, please refer to: Gst-nvtracker — DeepStream 6.4 documentation

Thank you for your answer.
A potential application involves camera stabilization and object tracking with unknown objects in industrial inspection. The robot should align the camera such that the objects selected by the user always remain in the center of the image, even if the robot moves or the environment changes. Since we don’t know the objects, we cannot yet train a specific detector to run nvinfer.
I would like to use gst-nvtracker because it is a flexible, state-of-the-art tracker that is highly optimized for the Jetson architecture. In addition, the application could be easily extended with other DeepStream plugins like nvinfer, nvof, etc.
Unfortunately, the documentation for shadow tracking does not specify what the maximum value of maxShadowTrackingAge is. Is there a way to run shadow tracking indefinitely?

Shadow tracking means a target is still tracked in the background for a period of time even when it is not associated with a detector object. A large maxShadowTrackingAge will consume more compute power and memory. Why do you need such a large maxShadowTrackingAge? Your use case should work the same as the normal nvtracker in the DeepStream sample pipeline, even with manually inserted objects.

According to the state diagram in the DeepStream documentation, the tracker will never reset shadowTrackingAge in our case, because there are no further detector matches. Also, I think probationAge has to be 0 to leave Tentative mode, not 1.
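To check my reading of the state diagram, here is a toy model of the target life cycle (Tentative → Active → terminated). The parameter names mirror the tracker config, but the transition logic is only my interpretation of the documentation, not NVIDIA's implementation:

```c
#include <stdbool.h>

/* Toy model of the nvtracker target life cycle as I read the state diagram.
 * Not NVIDIA code: field names mirror the config, the logic is an assumption. */

typedef enum { TENTATIVE, ACTIVE, TERMINATED } TargetState;

typedef struct {
    TargetState state;
    unsigned age;                 /* frames since target creation */
    unsigned shadowTrackingAge;   /* frames since the last detector match */
    unsigned probationAge;
    unsigned maxShadowTrackingAge;
    unsigned earlyTerminationAge;
} Target;

static void step(Target *t, bool matched_detection)
{
    if (t->state == TERMINATED)
        return;
    t->age++;
    if (matched_detection) {
        t->shadowTrackingAge = 0;   /* reset only on a detector match */
        if (t->state == TENTATIVE && t->age > t->probationAge)
            t->state = ACTIVE;      /* survived the probation period */
    } else {
        t->shadowTrackingAge++;
        if (t->state == TENTATIVE && t->shadowTrackingAge > t->earlyTerminationAge)
            t->state = TERMINATED;  /* tentative targets are dropped early */
        else if (t->shadowTrackingAge > t->maxShadowTrackingAge)
            t->state = TERMINATED;  /* shadow tracking budget exhausted */
    }
}
```

Under this model, a single inserted detection with probationAge=1 never leaves Tentative mode (age 1 is not greater than 1), and even with probationAge=0 the target is terminated once shadowTrackingAge exceeds maxShadowTrackingAge, since nothing ever resets it.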

I started modifying the deepstream-test1 application to test this. I'm using sample_ride_bike.mov from the sample streams (converted to .h264) because the scene only shows one moving person. I extracted the bounding box information of frame 0 using a src-pad probe on the nvinfer element.

After that, I added nvtracker to the pipeline (copy-pasted from deepstream-test2) and added a pad probe on the nvtracker sink pad that attaches the bounding box extracted from the nvinfer metadata once (see code below). The resulting pipeline is:
file-source -> h264-parser -> nvh264-decoder -> nvstreammux -> nvtracker -> nvvideoconvert -> nvosd -> video-renderer
However, the default osd_sink_pad_buffer_probe does not report any objects. It seems the object gets dropped by nvtracker, because it is already gone on the nvtracker src pad. Am I missing a step, or could the data be invalid?

This is the pad probe function I’m using to add the object:

static bool meta_attached = false;

static GstPadProbeReturn
nvtracker_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  if (meta_attached) {
    return GST_PAD_PROBE_OK;
  }
  else {

    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    // get frame meta
    NvDsFrameMeta *frame_meta = NULL;    
    frame_meta = (NvDsFrameMeta *) batch_meta->frame_meta_list->data;

    NvDsObjectMeta *object_meta = NULL;
    static gchar font_name[] = "Serif";

    // print message
    g_print("Tracker pad probe received, attaching metadata! \n");
    
    // TODO: is meta locking needed?
    // nvds_acquire_meta_lock (batch_meta);
    object_meta = nvds_acquire_obj_meta_from_pool(batch_meta);

    object_meta->confidence = 0.7;
    object_meta->class_id = PGIE_CLASS_ID_PERSON;
    object_meta->object_id = UNTRACKED_OBJECT_ID;
    object_meta->unique_component_id = 1;
    strcpy(object_meta->obj_label, "person");


    // C++ Reference does not work in C
    // NvOSD_TextParams & text_params = object_meta->text_params;
    NvOSD_TextParams *text_params = &(object_meta->text_params);
    NvOSD_RectParams *rect_params = &(object_meta->rect_params);
    NvDsComp_BboxInfo *det_bbox = &(object_meta->detector_bbox_info);

    /* Assign bounding box coordinates */
    // Rect Params for sample_ride_bike:
    // left = 289.514923, top = 134.535904, width = 493.972076, height = 831.827454 
    det_bbox->org_bbox_coords.left = 289.514923;
    det_bbox->org_bbox_coords.top = 134.535904;
    det_bbox->org_bbox_coords.width = 493.972076;
    det_bbox->org_bbox_coords.height = 831.827454;

    rect_params->left = 289.514923;
    rect_params->top = 134.535904;
    rect_params->width = 493.972076;
    rect_params->height = 831.827454;

    /* Background fill disabled; enable for a semi-transparent yellow box */
    rect_params->has_bg_color = 0;
    // rect_params->bg_color = (NvOSD_ColorParams) {
    // 1, 1, 0, 0.4};

    /* Red border of width 3 */
    rect_params->border_width = 3;
    rect_params->border_color = (NvOSD_ColorParams) {
    1, 0, 0, 1};

    /* display_text requires heap-allocated memory */
    text_params->display_text = g_strdup (object_meta->obj_label);
    /* Display text above the left top corner of the object */
    text_params->x_offset = rect_params->left;
    text_params->y_offset = rect_params->top - 10;
    /* Set black background for the text */
    text_params->set_bg_clr = 1;
    text_params->text_bg_clr = (NvOSD_ColorParams) {
    0, 0, 0, 1};
    /* Font face, size and color */
    text_params->font_params.font_name = font_name;
    text_params->font_params.font_size = 11;
    text_params->font_params.font_color = (NvOSD_ColorParams) {
    1, 1, 1, 1};

    nvds_add_obj_meta_to_frame(frame_meta, object_meta, NULL);
    // set flag to only attach metadata once
    meta_attached = true;
    // TODO: is meta locking needed?
    // nvds_release_meta_lock(batch_meta);

    return GST_PAD_PROBE_OK;
  }
}

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Can you try setting bInferDone = true? NVIDIA DeepStream SDK API Reference: _NvDsFrameMeta Struct Reference
