Custom Tracker Issue With Batch Processing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I have a custom low-level tracker library implementation for the Gst-nvtracker plugin. I have set pQuery->supportBatchProcessing = true in my NvMOT_Query() implementation. I have not implemented NvMOT_RemoveStreams(), as it is optional and I didn't have any resources internal to the tracker that needed cleanup.
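For reference, enabling batch processing in NvMOT_Query() looks roughly like the sketch below. The struct and entry point here are simplified stand-ins for the real definitions in nvdstracker.h (the real NvMOTQuery carries additional fields such as compute config, color formats, and memory type), kept minimal for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-in for NvMOTQuery from nvdstracker.h; the real struct
// has additional fields (compute config, color formats, memory type).
struct NvMOTQuery {
    bool supportBatchProcessing;
};

enum NvMOTStatus { NvMOTStatus_OK = 0 };

// Stand-in for the library's NvMOT_Query() entry point.
NvMOTStatus NvMOT_Query(uint16_t /*customConfigFilePathSize*/,
                        char * /*pCustomConfigFilePath*/,
                        NvMOTQuery *pQuery)
{
    // Advertise batch support so nvtracker delivers one NvMOT_Process()
    // call per batch instead of one call per stream.
    pQuery->supportBatchProcessing = true;
    return NvMOTStatus_OK;
}
```

With this flag set, each NvMOT_Process() call receives frames from multiple streams in a single NvMOTProcessParams batch.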

The tracker works correctly with multiple input videos using deepstream-app and a custom detector model as long as all video sources are live/available. With input videos of varying lengths, I observe cross-talk of bounding boxes across the streams once one or more streams become inactive. For example, in a 4-stream application, detection and tracking work normally while all 4 videos are alive. Once stream 3 finishes but streams 1, 2, and 4 are still playing, bounding boxes from streams 2 and 4 show up in stream 1 alongside its own boxes in the OSD display, and I see similar behavior on streams 2 and 4. I do not see this happening with pre-built trackers such as IOU or KLT.

It seems like something is not right in my custom tracker API implementation. I have checked the forums for related issues, and the sample code referenced in Deepstream Tracker FAQ - #4 by bcao does not support batch processing in the tracker. Since there is no official sample for a custom low-level tracker implementation that supports batch processing, any ideas or suggestions on how to go about fixing this issue would be helpful.


Hey customer, it’s not easy to debug an issue inside your custom tracker lib.

  1. Currently we don’t have a sample for how to customize the low-level tracker lib.
  2. Is it possible to use the NvDCF tracker?
  3. Is it possible to share your source code and repro steps with us if you really want us to help debug it? You should expect some delay, though, since it’s not easy to debug.

Thanks for your reply and suggestions. I’m not able to use any of the pre-built trackers (NvDCF, IOU, KLT) for the particular application I’m working on.

But as I mentioned in my post, I do not see bounding boxes displayed in the wrong tile on the OSD with the trackers provided by NVIDIA. Also, my custom tracker works correctly when all channels of the batch are available; the issue shows up only when one or more video streams become unavailable. This suggests the issue is related to how NvMOTTrackedObjBatch is being managed/updated, since that batch is passed by the nvtracker element to the nvdsosd element downstream.

Since there is no official sample for a low-level tracker lib, the forum posts have been very useful for understanding the implementation details. The FAQ you compiled, Deepstream Tracker FAQ - #4 by bcao, was also quite helpful. But the discussions have not covered a tracker that supports batch processing. Any insights/suggestions would be appreciated. Looking forward to hearing back.


Hello dilip.s,

Have you checked whether the tracked object metadata in NvMOTTrackedObjBatch are all correct in terms of streamID, even when some streams are not available? I would suggest first making sure the output metadata are all correct and match the expected results.

Thanks for the suggestion @pshin. The streamIDs in NvMOTTrackedObjBatch did match NvMOTFrame.streamID in processParams.
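A sanity check along the lines pshin suggested can be sketched as below. The structs are simplified stand-ins for the nvdstracker.h types (only the fields relevant to the check are shown), and streamIdsConsistent() is a hypothetical helper name, not part of the API:

```cpp
#include <cassert>
#include <cstdint>

using NvMOTStreamId = uint32_t;

// Simplified stand-ins for the relevant nvdstracker.h fields.
struct NvMOTFrame          { NvMOTStreamId streamID; uint32_t frameNum; };
struct NvMOTProcessParams  { uint32_t numFrames; NvMOTFrame *frameList; };
struct NvMOTTrackedObjList { NvMOTStreamId streamID; uint32_t frameNum; };
struct NvMOTTrackedObjBatch{ NvMOTTrackedObjList *list; uint32_t numFilled; };

// Returns true when every filled output list matches an input frame with
// the same streamID and frameNum -- the check suggested above.
bool streamIdsConsistent(const NvMOTProcessParams *in,
                         const NvMOTTrackedObjBatch *out)
{
    for (uint32_t i = 0; i < out->numFilled; ++i) {
        bool found = false;
        for (uint32_t j = 0; j < in->numFrames; ++j) {
            if (in->frameList[j].streamID == out->list[i].streamID &&
                in->frameList[j].frameNum == out->list[i].frameNum) {
                found = true;
                break;
            }
        }
        if (!found) return false;  // stale or mismatched output list
    }
    return true;
}
```

Running a check like this at the end of NvMOT_Process() while a stream is inactive would flag any output list carrying an ID that was not in the input batch.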

After looking at the issue more deeply, I figured out the problem: my tracker library was storing the tracked objects of stream_i/frame_t in NvMOTTrackedObjBatch.list[i]. This works fine as long as all N streams in a batch are available. But if one or more streams are unavailable, it creates a mismatch of tracked objects between frames t and t+1, because nvtracker may reuse the buffers for any available stream. E.g., NvMOTTrackedObjBatch.list[i] can hold stream A in frame t, and the same slot can be reused for stream B in frame t+1 if stream A is no longer available.

I created local buffers inside the custom low-level library to store each stream's tracked object list between frames t and t+1, keyed by streamID, and copied the results into the matching NvMOTTrackedObjBatch.list[i] for use downstream. This fixed my problem. Hope this information is useful to others facing the same issue.
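The fix described above can be sketched roughly as follows: per-stream results are buffered inside the library keyed by streamID, so an output slot that nvtracker reuses for a different stream never receives stale objects. The structs are simplified stand-ins for the nvdstracker.h types, and g_perStream / fillOutputList() are hypothetical names for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

using NvMOTStreamId = uint32_t;

// Simplified stand-ins for nvdstracker.h types.
struct NvMOTTrackedObj { uint32_t trackingId; };
struct NvMOTTrackedObjList {
    NvMOTStreamId streamID;
    NvMOTTrackedObj *list;
    uint32_t numAllocated;
    uint32_t numFilled;
};

// Hypothetical per-stream buffer kept inside the custom library between
// frames t and t+1, keyed by streamID -- NOT by batch index.
static std::map<NvMOTStreamId, std::vector<NvMOTTrackedObj>> g_perStream;

// Fill one output slot from the buffer that belongs to *its* streamID,
// so a slot reused for a different stream never receives stale objects.
void fillOutputList(NvMOTTrackedObjList *outList)
{
    const auto &buf = g_perStream[outList->streamID];
    uint32_t n = 0;
    for (const auto &obj : buf) {
        if (n >= outList->numAllocated) break;  // respect slot capacity
        outList->list[n++] = obj;
    }
    outList->numFilled = n;
}
```

The key point is the lookup by outList->streamID: even if nvtracker hands the library batch slot i for stream A in frame t and for stream B in frame t+1, each slot is filled from the buffer of the stream it actually carries.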

