Sustain bbox by tracker in DeepStream 5.1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2
• NVIDIA GPU Driver Version (valid for GPU only) 460.73
• Issue Type( questions, new requirements, bugs) questions

Hi, I use DeepStream with a YOLOv4 engine and I want to sustain the bounding box from the detector when the detector doesn’t return a box, for example for 1 or 2 frames. I use the NvDCF tracker, but it returns bounding boxes only when the tracker is in Active mode, i.e. only when the detector also detects the object. I set minTrackingConfidenceDuringInactive to 0.1 but I get the warning [NvDCF][Warning] minTrackingConfidenceDuringInactive is deprecated. I also tried to use tracker_bbox_info from NvDsObjectMeta, but there are also no boxes when the detector doesn’t detect the object. So what should I do to sustain the bbox from the detector by the tracker? Here are my files for the tracker:
tracker_config.yml (6.6 KB)
dstest2_tracker_config.txt (1.8 KB)

Sorry for the late response; we will investigate this issue and update soon.

Thanks

Hi, any update?

This is deprecated as it often creates false positives. The current NvDCF tracker implementation provides outputs only when the target is confirmed by the detector, which is why you may see blinking bboxes when the inference interval is greater than 0. However, you will notice the object ID is maintained on the display, because the tracker keeps tracking the object in Shadow mode even when there are no detector outputs.

If you enable and use the past-frame data, you can retrieve all the object data tracked in the past frames even though they were not displayed for those particular frames. Please search for “past-frame” in the DeepStream tracker documentation here.
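For reference, a minimal sketch of enabling past-frame reporting from application code (assuming a deepstream-test2-style setup where tracker is the gst-nvtracker element; the library and config paths are placeholders for your installation, and the property names are the ones documented for DS 5.1):

g_object_set (G_OBJECT (tracker),
    "ll-lib-file", "/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so",
    "ll-config-file", "tracker_config.yml",
    "enable-batch-process", TRUE,
    "enable-past-frame", TRUE,   /* required to receive NVDS_TRACKER_PAST_FRAME_META */
    NULL);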

Hi, thanks for the reply.
In the meantime I started using past-frame data, but it doesn’t make any difference. This is my code for extracting boxes from the past-frame data and from the object metadata. I use it in a custom plugin that is placed after the tracker in the pipeline.

/* Variables (l_user, user_meta, pPastFrameObjBatch, objStream, objList,
 * object, x, y, w, h, elipses, ...) are declared earlier in the plugin. */
for (l_user = batch_meta->batch_user_meta_list; l_user != NULL; l_user = l_user->next)
{
  user_meta = (NvDsUserMeta *) (l_user->data);
  if (user_meta && user_meta->base_meta.meta_type == NVDS_TRACKER_PAST_FRAME_META)
  {
    pPastFrameObjBatch = (NvDsPastFrameObjBatch *) (user_meta->user_meta_data);
    for (int i = 0; i < pPastFrameObjBatch->numFilled; i++)
    {
      objStream = (pPastFrameObjBatch->list) + i;          /* per-stream list */
      for (int j = 0; j < objStream->numFilled; j++)
      {
        objList = (objStream->list) + j;                   /* per-track list */
        if (objList->classId == 0)
        {
          for (int k = 0; k < objList->numObj; k++)
          {
            object = (objList->list) + k;                  /* bbox in one past frame */
            w = int(object->tBbox.width / 2);
            h = int(object->tBbox.height / 2);
            x = int(object->tBbox.left) + w;
            y = int(object->tBbox.top) + h;
            w = w + int(0.15 * w);
            h = h + int(0.15 * h);
            elipses.emplace_back(x, y, w, h);
          }
        }
      }
    }
  }
}
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
{
  frame_meta = (NvDsFrameMeta *) (l_frame->data);
  frame_number = frame_meta->frame_num;
  for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
  {
    obj_meta = (NvDsObjectMeta *) (l_obj->data);
    if (obj_meta->class_id == 0)
    {
      w = int(obj_meta->rect_params.width / 2);
      h = int(obj_meta->rect_params.height / 2);
      x = int(obj_meta->rect_params.left) + w;
      y = int(obj_meta->rect_params.top) + h;
      w = w + int(0.15 * w);
      h = h + int(0.15 * h);
      elipses.emplace_back(x, y, w, h);
    }
  }
}


In a further part of the code I draw ellipses with parameters taken from the box. So it looks like the past-frame data contains the same, or almost the same, boxes as NvDsObjectMeta; if NvDsObjectMeta doesn’t contain a box for an object, the past-frame data doesn’t contain that box either. But when I display the IDs from the tracker I see that the ID for the object is the same before and after the blinking, so it has to be tracked in Shadow mode. And here is the question: does the past-frame data contain data for the recent frame for which the tracker was used, or does it contain data for a previous frame, which would make no sense, because I don’t have access to the previous frame? Sorry, maybe I don’t understand something. Here are my new files for the tracker:
dstest2_tracker_config.txt (1.8 KB)
tracker_config.yml (6.6 KB)

Looks like you are drawing the bbox from the current frame AND the bboxes from the past-frame data on the same video frame image. As the name suggests, the past-frame data is the tracked object data corresponding to a past frame. I noticed that you don’t retrieve the frameNum for the past-frame data. Please utilize object->frameNum to check which frameNum each past-frame entry belongs to.
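To illustrate, a minimal sketch (the helper name is made up; field names follow nvds_tracker_meta.h in DS 5.1) that walks the past-frame batch and records which stream and which past frame each recovered bbox belongs to, instead of drawing it on the frame currently in the buffer:

/* Hypothetical helper for inspecting past-frame data. */
static void
print_past_frame_meta (NvDsPastFrameObjBatch *batch)
{
  for (guint i = 0; i < batch->numFilled; i++) {
    NvDsPastFrameObjStream *obj_stream = batch->list + i;     /* one entry per source */
    for (guint j = 0; j < obj_stream->numFilled; j++) {
      NvDsPastFrameObjList *obj_list = obj_stream->list + j;  /* one entry per track */
      for (guint k = 0; k < obj_list->numObj; k++) {
        NvDsPastFrameObj *obj = obj_list->list + k;           /* bbox in one past frame */
        g_print ("stream %u, track %lu, class %u: bbox belongs to past frame %u\n",
            obj_stream->streamID, (gulong) obj_list->uniqueId,
            obj_list->classId, obj->frameNum);
      }
    }
  }
}

You would then draw the bbox only on the video frame whose frame_num equals obj->frameNum, not on the latest frame.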

Another thing: if you draw something in real time only on the latest video frame, the past-frame data can’t be utilized, because it belongs to past frames. So if you want to draw smooth trajectories, I would suggest drawing them once the whole pipeline has completed, or visualizing them with some delay, at least by the max shadow tracking age.

Thank you for this answer!
It changes a lot in my understanding of past-frame data, but I’m still a little confused about access to the previous frames. Could you explain to me how to design my pipeline to be able to draw something on previous frames? Should I use a queue for it? If yes, how?

Again, please retrieve object->frameNum when you parse the past-frame data. Assuming you have a data structure that stores the trajectory of an object (sorted by frameNum), you can fill it in whenever you receive the past-frame data for that object. When the past-frame data is provided depends on when the object is re-associated by the tracker, but it would be at most the max shadow tracking age. So, if you visualize the object status on your display with a delay equal to the max shadow tracking age, you would not have any blinking bbox issue.
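A minimal C++ sketch of that book-keeping (all type and function names here are made up for illustration; it assumes a single stream and that your app can still render or encode a frame after a delay): boxes from the current NvDsObjectMeta and from past-frame data are all stored under their frame number, and a frame is only drawn once it is older than maxShadowTrackingAge:

#include <cstdint>
#include <map>
#include <vector>

struct Box { float left, top, width, height; };

// frameNum -> all boxes known for that frame (from obj_meta and past-frame data)
static std::map<uint64_t, std::vector<Box>> boxes_per_frame;

// Call for every bbox: current-frame boxes with frame_meta->frame_num,
// past-frame boxes with object->frameNum.
void add_box (uint64_t frame_num, const Box &b)
{
  boxes_per_frame[frame_num].push_back (b);
}

// Render/encode only frames that can no longer receive late past-frame data,
// i.e. frames older than maxShadowTrackingAge relative to the current frame.
template <typename RenderFn>
void flush_ready_frames (uint64_t current_frame_num, uint32_t max_shadow_age, RenderFn render)
{
  while (!boxes_per_frame.empty () &&
         boxes_per_frame.begin ()->first + max_shadow_age < current_frame_num) {
    auto it = boxes_per_frame.begin ();
    render (it->first, it->second);   // draw/write frame it->first with its boxes
    boxes_per_frame.erase (it);
  }
}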

But my question is how to make this delay. In the plugin I have access only to the current frame and current data. When I get past-frame data, e.g. in the 5th frame, and this past-frame data relates to the 4th frame, I don’t have access to the 4th frame, because one step before I sent it downstream. Should I store the last few buffers somewhere (as many as the max shadow age) and push them downstream only when they are older than the max shadow age? I’m not sure how I should do it. And maybe it will be helpful: I don’t display the output, I write it to a new video file.

I don’t think this is currently supported by DeepStream unless you create a custom module to do this. @mchi Please help this customer in this direction if you can. Also, please create an RFE about this so that the DeepStream team can plan accordingly.
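For reference, a very rough sketch of the kind of custom module this would need (this is not a complete GStreamer element: pad/caps handling, metadata patching, locking and EOS flushing are all omitted, and every name here is illustrative). The element holds back the last maxShadowTrackingAge buffers, copies any late past-frame bboxes into the batch meta of the matching older buffer still in its queue, and only then pushes that buffer downstream:

#include <deque>
#include <gst/gst.h>

struct DelayQueue {
  std::deque<GstBuffer *> pending;   // buffers held back, oldest first
  guint max_delay;                   // e.g. maxShadowTrackingAge
};

// Called from the custom element's chain function for each incoming buffer.
static GstFlowReturn
delay_chain (DelayQueue *q, GstPad *srcpad, GstBuffer *buf)
{
  q->pending.push_back (buf);        // keep ownership, do not push yet

  /* 1) Parse the past-frame data attached to 'buf' and copy the recovered
   *    bboxes into the NvDsBatchMeta of the matching older buffer that is
   *    still sitting in 'pending' (same loop as in the earlier posts). */

  /* 2) Forward the oldest buffer once it can no longer receive late updates. */
  if (q->pending.size () > q->max_delay) {
    GstBuffer *oldest = q->pending.front ();
    q->pending.pop_front ();
    return gst_pad_push (srcpad, oldest);
  }
  return GST_FLOW_OK;
}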


I hope there is a different way than creating a custom module. In the previous version it was one line in the tracker config file (minTrackingConfidenceDuringInactive), and now it requires creating a custom module. It would be really confusing.

The objects tracked in Shadow mode may be false negatives, which we would want to show (just like your preference); however, they could also be false positives, resulting in ghost tracking. This was the motivation behind deprecating minTrackingConfidenceDuringInactive. We will see if we can come up with a better idea to address this case and plan it for the next releases. Stay tuned!

Hi @mchi,
Any update, or some hints for me on how to use past-frame data to sustain the bbox?

Hello, anyone?

@aleksandra.osztynowicz1
As mentioned by @pshin, the parameter minTrackingConfidenceDuringInactive led to false positives. According to the NvDCF tuning guide (NvDCF Parameter Tuning Guide — DeepStream 5.1 Release documentation), the tracker starts tracking in Active mode, but changes to Inactive if:

  1. The tracking confidence is lower than minTrackerConfidence or
  2. It is not matched with a detector bbox during data association.

It seems that your issue is the second one. Therefore you can rely on the user metadata generated by the tracker when it detected the object in previous frames (this requires enable-past-frame = 1, and that the object was detected by the detector and matched with a detector bbox by the tracker within the maxShadowTrackingAge).

According to the DS Plugin Development Guide:
https://docs.nvidia.com/metropolis/deepstream/5.0DP/plugin-manual/#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.02.html

https://docs.nvidia.com/metropolis/deepstream/5.0/dev-guide/DeepStream_Development_Guide/baggage/nvdstracker_8h.html

You can retrieve the past-frame data from the tracker plug-in using the function NvMOT_ProcessPast() (described in depth at the aforementioned link).

Then use frameNum and tBbox to get the bounding box of an object from the tracker in the past frame (the latest frame where the tracker was active, or another one) and draw it.

If it’s not available through that metadata either, then you will want to tune the NvDCF tracker.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_NvDCF_parameter_tuning_guide.html

For more robust tracking, you may increase the value of maxShadowTrackingAge, because it allows an object to be re-associated even after missed detections over multiple consecutive frames, so the tracker won’t terminate the track.

The official DS Doc on Tracker covers how to enable the past-frame data in depth. Please see Gst-nvtracker — DeepStream 5.1 Release documentation

“Past-frame” should be highlighted in the linked page for your convenience, so hopefully you can find the relevant information easily.

@aleksandra.osztynowicz1

Looks like the upcoming DS 6.0 GA will have far fewer issues in sustaining bboxes on the display, even without using past-frame data. So, if you can wait until the DS 6.0 GA release, you may not need to develop a custom plugin.

@dsingalNV
Thank you for the reply,
I already know how to retrieve the past-frame data from the tracker metadata, but the problem is that I get this data with a delay; for example, in the buffer for frame number 6 I get past-frame data for frames 3-5, so I can’t draw these boxes on the right frames, because at that moment I have access only to the 6th frame. @pshin mentioned that I have to introduce some delay to modify past frames, but he didn’t tell me how to do it. So I will try with DS 6.0, but in the meantime maybe someone has an idea how to prepare a custom library to introduce a delay in processing frames. Thanks

@pshin could you tell me how in DS 6.0 it is possible to sustain the bbox without past-frame data if the detector doesn’t return a bbox for a few frames? Is there some new parameter for the tracker, or what? I got early access to DS 6.0 but it looks the same: if the detector doesn’t return a bbox, the tracker also doesn’t return a bbox.

I was referring to DS 6.0 GA (instead of EA).